2026-04-30 17:18:13,469 [ 368424 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:42, check_args_and_update_paths)
2026-04-30 17:18:13,469 [ 368424 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:86, check_args_and_update_paths)
2026-04-30 17:18:13,470 [ 368424 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:97, check_args_and_update_paths)
2026-04-30 17:18:13,470 [ 368424 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:99, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_tfur6q --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=1e0b53d756cf -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 'test_mysql_database_engine/test.py::test_mysql_types[float_2]' 'test_mysql_database_engine/test.py::test_mysql_types[timestamp_6]' 'test_mysql_database_engine/test.py::test_mysql_types[timestamp_default]' test_mysql_database_engine/test.py::test_password_leak test_mysql_database_engine/test.py::test_predefined_connection_configuration test_non_default_compression/test.py::test_preconfigured_custom_codec test_non_default_compression/test.py::test_preconfigured_default_codec test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec test_non_default_compression/test.py::test_uncompressed_cache_custom_codec test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec test_old_versions/test.py::test_client_is_older_than_server test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard test_old_versions/test.py::test_server_is_older_than_client test_optimize_on_insert/test.py::test_empty_parts_optimize test_parallel_replicas_failover/test.py::test_skip_replicas_without_table test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}]' test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards test_polymorphic_parts/test.py::test_compact_parts_only 'test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact]' 'test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide]' 'test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0]' 'test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1]' test_polymorphic_parts/test.py::test_polymorphic_parts_index test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl test_postgresql_database_engine/test.py::test_postgres_database_old_syntax test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1 test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2 test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema test_postgresql_replica_database_engine_2/test.py::test_default_columns test_postgresql_replica_database_engine_2/test.py::test_dependent_loading test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot test_postgresql_replica_database_engine_2/test.py::test_generated_columns test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence test_postgresql_replica_database_engine_2/test.py::test_materialized_view test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration test_postgresql_replica_database_engine_2/test.py::test_quoting_publication test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication test_postgresql_replica_database_engine_2/test.py::test_replica_consumer test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name test_postgresql_replica_database_engine_2/test.py::test_table_override test_postgresql_replica_database_engine_2/test.py::test_toast test_postgresql_replica_database_engine_2/test.py::test_too_many_parts test_profile_events_s3/test.py::test_profile_events test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection test_range_hashed_dictionary_types/test.py::test_range_hashed_dict test_remote_prewhere/test.py::test_remote test_rename_column/test.py::test_rename_distributed test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select test_rename_column/test.py::test_rename_parallel test_rename_column/test.py::test_rename_parallel_same_node test_rename_column/test.py::test_rename_with_parallel_insert test_rename_column/test.py::test_rename_with_parallel_merges test_rename_column/test.py::test_rename_with_parallel_select test_rename_column/test.py::test_rename_with_parallel_slow_insert test_rename_column/test.py::test_rename_with_parallel_ttl_delete test_rename_column/test.py::test_rename_with_parallel_ttl_move test_replicated_database_cluster_groups/test.py::test_cluster_groups test_replicated_table_attach/test.py::test_startup_with_small_bg_pool test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop test_rocksdb_read_only/test.py::test_read_only test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable test_s3_low_cardinality_right_border/test.py::test_s3_right_border test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy 'test_search_orphaned_parts/test.py::test_search_orphaned_parts[False]' 'test_search_orphaned_parts/test.py::test_search_orphaned_parts[True]' test_select_access_rights/test_from_system_tables.py::test_information_schema -vvv" altinityinfra/integration-tests-runner:37a9815fd2fa '.
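The PYTEST_ADDOPTS value above is ordinary pytest/pytest-xdist syntax, so the same selection can be replayed outside the runner container. A minimal sketch of an equivalent invocation, assuming a checkout of tests/integration as the working directory; the chosen test file is only an example, and the harness-specific --run-id flag is omitted:

    # Sketch: replays the CI flags from the log above with plain pytest + pytest-xdist.
    import sys

    import pytest

    sys.exit(pytest.main([
        "--dist=loadfile",   # keep all tests of one file on the same worker
        "-n", "10",          # ten workers, matching "created: 10/10 workers" below
        "-rfEps",            # summarize (f)ailed, (E)rrors, (p)assed, (s)kipped
        "--color=no",
        "--durations=0",
        "-vvv",
        "test_old_versions/test.py",  # any subset of the test IDs listed above
    ]))

--dist=loadfile matters here because the tests in one file share a cluster fixture; spreading them across workers would bring up the same Docker Compose cluster more than once.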
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: order-1.0.1, random-0.2, timeout-2.2.0, repeat-0.9.3, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

test_non_default_compression/test.py::test_preconfigured_custom_codec
test_mysql_database_engine/test.py::test_mysql_types[float_2]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}]
test_polymorphic_parts/test.py::test_compact_parts_only
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy
test_old_versions/test.py::test_client_is_older_than_server
test_rename_column/test.py::test_rename_distributed
test_s3_low_cardinality_right_border/test.py::test_s3_right_border
test_postgresql_database_engine/test.py::test_datetime
test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication
[gw9] [ 1%] PASSED test_old_versions/test.py::test_client_is_older_than_server
test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard
[gw9] [ 2%] PASSED test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard
test_old_versions/test.py::test_server_is_older_than_client
[gw9] [ 3%] PASSED test_old_versions/test.py::test_server_is_older_than_client
[gw4] [ 4%] PASSED test_postgresql_database_engine/test.py::test_datetime
test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays
[gw4] [ 5%] PASSED test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays
test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables
[gw7] [ 6%] PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border
test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2
test_parallel_replicas_failover/test.py::test_skip_replicas_without_table
[gw7] [ 7%] PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2
test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3
[gw7] [ 8%] PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3
[gw4] [ 9%] PASSED test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables
test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl
[gw4] [ 10%] PASSED test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl
test_postgresql_database_engine/test.py::test_postgres_database_old_syntax
[gw4] [ 11%] PASSED test_postgresql_database_engine/test.py::test_postgres_database_old_syntax
test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries
[gw4] [ 12%] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries
test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache
[gw6] [ 13%] PASSED test_mysql_database_engine/test.py::test_mysql_types[float_2]
test_mysql_database_engine/test.py::test_mysql_types[timestamp_6]
test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards
[gw5] [ 14%] PASSED test_non_default_compression/test.py::test_preconfigured_custom_codec
test_non_default_compression/test.py::test_preconfigured_default_codec
[gw6] [ 15%] PASSED test_mysql_database_engine/test.py::test_mysql_types[timestamp_6]
test_mysql_database_engine/test.py::test_mysql_types[timestamp_default]
[gw8] [ 16%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy
[gw4] [ 17%] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache
test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl
[gw6] [ 18%] PASSED test_mysql_database_engine/test.py::test_mysql_types[timestamp_default]
test_mysql_database_engine/test.py::test_password_leak
[gw6] [ 19%] PASSED test_mysql_database_engine/test.py::test_password_leak
test_mysql_database_engine/test.py::test_predefined_connection_configuration
[gw8] [ 20%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list
[gw4] [ 21%] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl
test_postgresql_database_engine/test.py::test_postgresql_database_with_schema
[gw6] [ 22%] PASSED test_mysql_database_engine/test.py::test_predefined_connection_configuration
[gw8] [ 23%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy
[gw4] [ 24%] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_with_schema
test_postgresql_database_engine/test.py::test_postgresql_fetch_tables
[gw4] [ 25%] PASSED test_postgresql_database_engine/test.py::test_postgresql_fetch_tables
test_postgresql_database_engine/test.py::test_postgresql_password_leak
test_search_orphaned_parts/test.py::test_search_orphaned_parts[False]
[gw8] [ 26%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy
[gw4] [ 27%] PASSED test_postgresql_database_engine/test.py::test_postgresql_password_leak
test_postgresql_database_engine/test.py::test_predefined_connection_configuration
[gw0] [ 28%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}]
[gw7] [ 29%] PASSED test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards
test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards
[gw7] [ 30%] PASSED test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards
[gw8] [ 31%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy
[gw4] [ 32%] PASSED test_postgresql_database_engine/test.py::test_predefined_connection_configuration
test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop
[gw8] [ 33%] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy
[gw9] [ 34%] PASSED test_parallel_replicas_failover/test.py::test_skip_replicas_without_table
test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas
test_optimize_on_insert/test.py::test_empty_parts_optimize
test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection
[gw2] [ 35%] PASSED test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication
[gw9] [ 36%] PASSED test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas
test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options
[gw0] [ 37%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}]
[gw6] [ 38%] PASSED test_search_orphaned_parts/test.py::test_search_orphaned_parts[False]
test_search_orphaned_parts/test.py::test_search_orphaned_parts[True]
[gw2] [ 39%] PASSED test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options
test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1
[gw1] [ 40%] FAILED test_rename_column/test.py::test_rename_distributed
test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select
test_replicated_table_attach/test.py::test_startup_with_small_bg_pool
[gw3] [ 41%] PASSED test_polymorphic_parts/test.py::test_compact_parts_only
test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact]
[gw4] [ 42%] PASSED test_optimize_on_insert/test.py::test_empty_parts_optimize
[gw8] [ 43%] PASSED test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection
[gw0] [ 44%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}]
[gw3] [ 45%] PASSED test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact]
test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide]
test_remote_prewhere/test.py::test_remote
test_range_hashed_dictionary_types/test.py::test_range_hashed_dict
[gw3] [ 46%] PASSED test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide]
test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0]
[gw7] [ 47%] PASSED test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop
test_rocksdb_read_only/test.py::test_read_only
[gw9] [ 48%] FAILED test_replicated_table_attach/test.py::test_startup_with_small_bg_pool
test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned
[gw9] [ 49%] FAILED test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned
[gw7] [ 50%] PASSED test_rocksdb_read_only/test.py::test_read_only
test_replicated_database_cluster_groups/test.py::test_cluster_groups
[gw8] [ 51%] PASSED test_remote_prewhere/test.py::test_remote
[gw4] [ 52%] PASSED test_range_hashed_dictionary_types/test.py::test_range_hashed_dict
[gw0] [ 53%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}]
test_select_access_rights/test_from_system_tables.py::test_information_schema
test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable
[gw6] [ 54%] PASSED test_search_orphaned_parts/test.py::test_search_orphaned_parts[True]
test_profile_events_s3/test.py::test_profile_events
[gw0] [ 55%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}]
[gw8] [ 56%] PASSED test_select_access_rights/test_from_system_tables.py::test_information_schema
[gw4] [ 57%] PASSED test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable
[gw5] [ 58%] PASSED test_non_default_compression/test.py::test_preconfigured_default_codec
test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec
[gw2] [ 59%] FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1
[gw2] [ 59%] ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1
test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2
[gw2] [ 60%] FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2
[gw2] [ 60%] ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2
test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema
[gw2] [ 61%] FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema
[gw2] [ 61%] ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema
test_postgresql_replica_database_engine_2/test.py::test_default_columns
[gw2] [ 62%] FAILED test_postgresql_replica_database_engine_2/test.py::test_default_columns
[gw2] [ 62%] ERROR test_postgresql_replica_database_engine_2/test.py::test_default_columns
test_postgresql_replica_database_engine_2/test.py::test_dependent_loading
[gw2] [ 63%] FAILED test_postgresql_replica_database_engine_2/test.py::test_dependent_loading
[gw2] [ 63%] ERROR test_postgresql_replica_database_engine_2/test.py::test_dependent_loading
test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot
[gw2] [ 64%] FAILED test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot
[gw2] [ 64%] ERROR test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot
test_postgresql_replica_database_engine_2/test.py::test_generated_columns
[gw2] [ 65%] FAILED test_postgresql_replica_database_engine_2/test.py::test_generated_columns
[gw2] [ 65%] ERROR test_postgresql_replica_database_engine_2/test.py::test_generated_columns
test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence
[gw2] [ 66%] FAILED test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence
[gw2] [ 66%] ERROR test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence
test_postgresql_replica_database_engine_2/test.py::test_materialized_view
[gw2] [ 67%] FAILED test_postgresql_replica_database_engine_2/test.py::test_materialized_view
[gw2] [ 67%] ERROR test_postgresql_replica_database_engine_2/test.py::test_materialized_view
test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration
[gw2] [ 68%] FAILED test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration
[gw2] [ 68%] ERROR test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration
test_postgresql_replica_database_engine_2/test.py::test_quoting_publication
[gw2] [ 69%] FAILED test_postgresql_replica_database_engine_2/test.py::test_quoting_publication
[gw2] [ 69%] ERROR test_postgresql_replica_database_engine_2/test.py::test_quoting_publication
test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication
[gw2] [ 70%] FAILED test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication
[gw2] [ 70%] ERROR test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication
test_postgresql_replica_database_engine_2/test.py::test_replica_consumer
[gw2] [ 71%] FAILED test_postgresql_replica_database_engine_2/test.py::test_replica_consumer
[gw2] [ 71%] ERROR test_postgresql_replica_database_engine_2/test.py::test_replica_consumer
test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name
[gw2] [ 72%] FAILED test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name
[gw2] [ 72%] ERROR test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name
test_postgresql_replica_database_engine_2/test.py::test_table_override
[gw2] [ 73%] FAILED test_postgresql_replica_database_engine_2/test.py::test_table_override
[gw2] [ 73%] ERROR test_postgresql_replica_database_engine_2/test.py::test_table_override
test_postgresql_replica_database_engine_2/test.py::test_toast
[gw2] [ 74%] FAILED test_postgresql_replica_database_engine_2/test.py::test_toast
[gw2] [ 74%] ERROR test_postgresql_replica_database_engine_2/test.py::test_toast
test_postgresql_replica_database_engine_2/test.py::test_too_many_parts
[gw1] [ 75%] FAILED test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select
test_rename_column/test.py::test_rename_parallel
[gw6] [ 76%] PASSED test_profile_events_s3/test.py::test_profile_events
[gw1] [ 77%] FAILED test_rename_column/test.py::test_rename_parallel
test_rename_column/test.py::test_rename_parallel_same_node
[gw3] [ 78%] FAILED test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0]
test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1]
[gw1] [ 79%] FAILED test_rename_column/test.py::test_rename_parallel_same_node
test_rename_column/test.py::test_rename_with_parallel_insert
[gw2] [ 80%] PASSED test_postgresql_replica_database_engine_2/test.py::test_too_many_parts
[gw2] [ 80%] ERROR test_postgresql_replica_database_engine_2/test.py::test_too_many_parts
[gw0] [ 81%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}]
[gw7] [ 82%] FAILED test_replicated_database_cluster_groups/test.py::test_cluster_groups
[gw0] [ 83%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}]
[gw0] [ 84%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}]
[gw3] [ 85%] PASSED test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1]
test_polymorphic_parts/test.py::test_polymorphic_parts_index
[gw3] [ 86%] PASSED test_polymorphic_parts/test.py::test_polymorphic_parts_index
test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive
[gw3] [ 87%] FAILED test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive
[gw0] [ 88%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}]
[gw5] [ 89%] PASSED test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec
test_non_default_compression/test.py::test_uncompressed_cache_custom_codec
[gw5] [ 90%] PASSED test_non_default_compression/test.py::test_uncompressed_cache_custom_codec
test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec
[gw5] [ 91%] PASSED test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec
[gw0] [ 92%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}]
[gw0] [ 93%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}]
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}]
[gw0] [ 94%] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}]
[gw1] [ 95%] PASSED test_rename_column/test.py::test_rename_with_parallel_insert
test_rename_column/test.py::test_rename_with_parallel_merges
[gw1] [ 96%] PASSED test_rename_column/test.py::test_rename_with_parallel_merges
test_rename_column/test.py::test_rename_with_parallel_select
[gw1] [ 97%] PASSED test_rename_column/test.py::test_rename_with_parallel_select
test_rename_column/test.py::test_rename_with_parallel_slow_insert
[gw1] [ 98%] PASSED test_rename_column/test.py::test_rename_with_parallel_slow_insert
test_rename_column/test.py::test_rename_with_parallel_ttl_delete
[gw1] [ 99%] PASSED test_rename_column/test.py::test_rename_with_parallel_ttl_delete
test_rename_column/test.py::test_rename_with_parallel_ttl_move
[gw1] [100%] PASSED test_rename_column/test.py::test_rename_with_parallel_ttl_move

==================================== ERRORS ====================================
____ ERROR at teardown of test_database_with_multiple_non_default_schemas_1 ____
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
checking table 0
Checking table test_schema.postgresql_replica_0 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_0`
checking table 1
Checking table test_schema.postgresql_replica_1 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_1`
checking table 2
Checking table test_schema.postgresql_replica_2 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_2`
checking table 3
Checking table test_schema.postgresql_replica_3 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_3`
checking table 4
Checking table test_schema.postgresql_replica_4 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_4`
synchronization Ok
assert show tables Ok
------------------------------ Captured log call -------------------------------
2026-04-30 17:24:13 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_with_schema" on instance (cluster.py:3602, query)
2026-04-30 17:24:13 [ 413 ] DEBUG : Executing query CREATE DATABASE "postgres_database_with_schema" ENGINE = PostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword', 'test_schema') on instance (cluster.py:3602, query)
2026-04-30 17:24:15 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_0 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
2026-04-30 17:24:16 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_1 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
2026-04-30 17:24:18 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_2 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
2026-04-30 17:24:20 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_3 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
2026-04-30 17:24:21 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_4 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
2026-04-30 17:24:22 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
2026-04-30 17:24:23 [ 413 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'test_schema.postgresql_replica_0, test_schema.postgresql_replica_1, test_schema.postgresql_replica_2, test_schema.postgresql_replica_3, test_schema.postgresql_replica_4', materialized_postgresql_tables_list_with_schema=1 on instance (cluster.py:3602, query)
2026-04-30 17:24:25 [ 413 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3602, query)
2026-04-30 17:24:26 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_0' on instance (cluster.py:3602, query)
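Note the shape of this failure: each gw2 test above reports FAILED and then a separate ERROR, because the autouse fixture shown in the traceback runs pg_manager.restart() in teardown against the same dead server, and pytest reports a teardown exception independently of the test failure. A self-contained sketch of that pattern; _Manager is an illustrative stand-in, not the real helpers.postgres_utility.PostgresManager API:

    import pytest

    class _Manager:
        # Stand-in: the real restart() drops and recreates databases via
        # clickhouse-client, which raises once the server stops answering.
        def restart(self):
            raise RuntimeError("Connection refused")

    pg_manager = _Manager()

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # the test body runs here; a failure is reported as FAILED
        pg_manager.restart()  # raises during teardown -> a separate ERROR entry

    def test_example():
        assert False  # stands in for the real failing test body

Running pytest on this file produces exactly the FAILED-then-ERROR pairing seen on gw2.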
2026-04-30 17:24:28 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_0' on instance (cluster.py:3602, query) 2026-04-30 17:24:29 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_0' on instance (cluster.py:3602, query) 2026-04-30 17:24:31 [ 413 ] DEBUG : Executing query select * from `postgres_database_with_schema`.`postgresql_replica_0` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:32 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_0` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:34 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_1' on instance (cluster.py:3602, query) 2026-04-30 17:24:35 [ 413 ] DEBUG : Executing query select * from `postgres_database_with_schema`.`postgresql_replica_1` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:36 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_1` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:38 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_2' on instance (cluster.py:3602, query) 2026-04-30 17:24:41 [ 413 ] DEBUG : Executing query select * from `postgres_database_with_schema`.`postgresql_replica_2` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:42 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_2` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:43 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_3' on instance (cluster.py:3602, query) 2026-04-30 17:24:45 [ 413 ] DEBUG : Executing query select * from `postgres_database_with_schema`.`postgresql_replica_3` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:46 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_3` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:48 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_4' on instance (cluster.py:3602, query) 2026-04-30 17:24:50 [ 413 ] DEBUG : Executing query select * from `postgres_database_with_schema`.`postgresql_replica_4` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:52 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_4` order by key; on instance (cluster.py:3602, query) 2026-04-30 17:24:55 [ 413 ] DEBUG : Executing query SHOW TABLES FROM test_database on instance (cluster.py:3602, query) 2026-04-30 17:24:56 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:24:56 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:24:57 [ 413 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check) 2026-04-30 17:24:57 [ 413 ] DEBUG : Stdout: 789 ? 
00:01:30 clickhouse (cluster.py:121, run_and_check) 2026-04-30 17:24:57 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:24:57 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:24:57 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:24:57 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:24:59 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:00 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:00 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:01 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:02 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:02 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:03 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:04 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:04 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:07 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:08 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:08 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' 
| grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:08 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:09 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:09 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:11 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:12 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:12 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:13 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:14 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:14 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:15 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:16 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:16 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:18 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:19 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:19 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:21 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:22 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax 
| grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:24 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:37 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:38 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:38 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:40 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:41 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:41 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:42 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:43 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:43 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:44 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:45 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:45 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:46 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:46 [ 413 ] DEBUG : Stdout:1743 (cluster.py:121, run_and_check) 2026-04-30 17:25:47 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:47 [ 413 ] DEBUG : Command:['docker', 'exec', 
'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:48 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:48 [ 413 ] DEBUG : Stdout:1743 (cluster.py:121, run_and_check) 2026-04-30 17:25:49 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:49 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:50 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check) 2026-04-30 17:25:50 [ 413 ] DEBUG : Stdout:1743 (cluster.py:121, run_and_check) 2026-04-30 17:25:51 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:51 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:52 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:52 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:52 [ 413 ] DEBUG : No clickhouse process running. Start new one. 
(cluster.py:3964, start_clickhouse) 2026-04-30 17:25:52 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:52 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check) 2026-04-30 17:25:55 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:55 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:56 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check) 2026-04-30 17:25:56 [ 413 ] DEBUG : Clickhouse process running. (cluster.py:3975, start_clickhouse) 2026-04-30 17:25:56 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:56 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:25:57 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check) 2026-04-30 17:25:57 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:25:59 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:00 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:02 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:03 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:05 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:06 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:08 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:09 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:11 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query) 2026-04-30 17:26:14 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:26:14 [ 413 ] DEBUG : 
2026-04-30 17:26:17 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:26:17 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
2026-04-30 17:26:17 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:17 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:26:19 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:26:19 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:20 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:22 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:23 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:24 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:25 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:26 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:31 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:32 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:34 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:35 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:35 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:26:37 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:26:37 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
2026-04-30 17:26:37 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:37 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:26:40 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:26:40 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:42 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:44 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:46 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:47 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:50 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:51 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:53 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:55 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:56 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
2026-04-30 17:26:58 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:58 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:27:00 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:27:00 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
2026-04-30 17:27:00 [ 413 ] ERROR : No time left to start. But process is still running. Will dump threads. (cluster.py:4013, wait_start)
2026-04-30 17:27:00 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:27:00 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:27:02 [ 413 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2026-04-30 17:27:02 [ 413 ] DEBUG : Stdout: 1808 ? 00:00:25 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:27:02 [ 413 ] INFO : PS RESULT: PID TTY TIME CMD
 1808 ? 00:00:25 clickhouse (cluster.py:4019, wait_start)
2026-04-30 17:27:02 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:27:02 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:27:04 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check)
2026-04-30 17:27:04 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 1808"] (cluster.py:2173, exec_in_container)
2026-04-30 17:27:04 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 1808"] (cluster.py:113, run_and_check)
2026-04-30 17:32:04 [ 413 ] WARNING : Current start attempt failed. Will kill 1808 just in case. (cluster.py:3982, start_clickhouse)
2026-04-30 17:32:04 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 1808'] (cluster.py:2173, exec_in_container)
2026-04-30 17:32:04 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'kill -9 1808'] (cluster.py:113, run_and_check)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:11 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:14 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
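What the five minutes of polling above amount to: start_clickhouse() launches the daemon, wait_start() alternates between checking for a clickhouse PID and probing the server with select 20, and once the deadline passes while the process is still alive it dumps all thread backtraces with gdb and kills PID 1808. A minimal sketch of that cycle follows; exec_in_container, clickhouse_pids and the clickhouse-client probe here are hypothetical stand-ins, not the real helpers/cluster.py API.

import subprocess
import time

CONTAINER = "roottestpostgresqlreplicadatabaseengine2_gw2_instance_1"

def exec_in_container(cmd: str) -> subprocess.CompletedProcess:
    # docker exec <container> bash -c '<cmd>', capturing output
    return subprocess.run(
        ["docker", "exec", CONTAINER, "bash", "-c", cmd],
        capture_output=True,
        text=True,
    )

def clickhouse_pids() -> list:
    # the same ps/grep/awk pipeline the log runs repeatedly
    out = exec_in_container(
        "ps ax | grep 'clickhouse' | grep -v 'grep' | awk '{print $1}'"
    )
    return out.stdout.split()

def wait_start(deadline_s: float = 60.0) -> None:
    started = time.monotonic()
    while time.monotonic() - started < deadline_s:
        if not clickhouse_pids():
            raise RuntimeError("clickhouse process died during startup")
        # probe the TCP port; return code 210 / NETWORK_ERROR means "not yet"
        probe = exec_in_container("clickhouse client --query 'select 20'")
        if probe.returncode == 0:
            return  # server is answering queries
        time.sleep(1.5)
    # deadline passed but the process is alive: dump stacks, then kill it,
    # mirroring the gdb and kill -9 steps recorded above
    pid = clickhouse_pids()[0]
    exec_in_container(f"gdb -batch -ex 'thread apply all bt full' -p {pid}")
    exec_in_container(f"kill -9 {pid}")
    raise TimeoutError("server never accepted connections; killed " + pid)

Note the timestamps above: the gdb dump itself ran from 17:27:04 to 17:32:04, so the teardowns that follow execute against a server that was killed rather than restarted, which is what produces the cascade of Connection refused errors below.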
____ ERROR at teardown of test_database_with_multiple_non_default_schemas_2 ____
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:16 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "clickhouse_postgres_db0" on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:17 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:18 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
______ ERROR at teardown of test_database_with_single_non_default_schema _______
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:20 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_with_schema" on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:21 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:22 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
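Every remaining error report in this run repeats the pattern above: the autouse fixture's teardown calls pg_manager.restart(), whose cleanup path (clear -> drop_materialized_db) and recovery path (prepare -> create_clickhouse_postgres_db) both issue queries against the instance, so one dead server yields two chained QueryRuntimeExceptions per test. A minimal sketch of that chain, using hypothetical stand-ins (PgManagerSketch, DeadServer) rather than the real helpers/postgres_utility.py:

import pytest

class DeadServer:
    # stands in for a ClickHouse instance whose port 9000 refuses connections
    def query(self, sql):
        raise ConnectionRefusedError(f"Code: 210 (NETWORK_ERROR) while running: {sql}")

class PgManagerSketch:
    def __init__(self, instance):
        self.instance = instance  # any object exposing .query(sql)

    def clear(self):
        # mirrors drop_materialized_db() in the first traceback
        self.instance.query("DROP DATABASE IF EXISTS `test_database` SYNC")

    def prepare(self):
        # mirrors drop_clickhouse_postgres_db() in the second traceback
        self.instance.query('DROP DATABASE IF EXISTS "postgres_database"')

    def restart(self):
        try:
            self.clear()       # raises first: Connection refused
        finally:
            # the real helper reaches prepare() while handling the clear()
            # failure; a finally block reproduces the same exception chaining
            self.prepare()     # raises again, chained onto the first

@pytest.fixture
def pg_manager():
    return PgManagerSketch(DeadServer())

@pytest.fixture(autouse=True)
def setup_teardown(pg_manager):
    yield                 # run the test body
    pg_manager.restart()  # teardown fails if the server never came back

Running pytest against this sketch reproduces the shape of each report below: a failure in clear(), then "During handling of the above exception, another exception occurred" from prepare().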
__________________ ERROR at teardown of test_default_columns ___________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE test_default_columns ( key integer PRIMARY KEY, x integer, y text DEFAULT 'y1', z integer, a text DEFAULT 'a1', b integer);
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:23 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:25 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:27 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
_________________ ERROR at teardown of test_dependent_loading __________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_dependent_loading" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:28 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database.test_dependent_loading SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:29 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:31 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
_____________ ERROR at teardown of test_failed_load_from_snapshot ______________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:33 [ 413 ] DEBUG : Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:34 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:36 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
_________________ ERROR at teardown of test_generated_columns __________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE test_generated_columns ( key integer PRIMARY KEY, x integer DEFAULT 0, temp integer DEFAULT 0, y integer GENERATED ALWAYS AS (x*2) STORED, z text DEFAULT 'z');
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:38 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:39 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:40 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
__________ ERROR at teardown of test_generated_columns_with_sequence ___________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE test_generated_columns_with_sequence ( key integer PRIMARY KEY, x integer, y integer GENERATED ALWAYS AS (x*2) STORED, z text);
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:42 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:43 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:44 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
_________________ ERROR at teardown of test_materialized_view __________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:45 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS test_database on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:47 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:48 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
________ ERROR at teardown of test_predefined_connection_configuration _________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:49 [ 413 ] DEBUG : Executing query CREATE DATABASE test_database ENGINE = MaterializedPostgreSQL(postgres1) SETTINGS materialized_postgresql_tables_list='test_table' on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:50 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:32:52 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
________________ ERROR at teardown of test_quoting_publication _________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:56 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres-postgres" on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:32:59 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:01 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
___________ ERROR at teardown of test_remove_table_from_replication ____________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:03 [ 413 ] DEBUG : Executing query INSERT INTO `postgres_database`.postgresql_replica_0 SELECT number, number from numbers(10000) on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:33:05 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:07 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
__________________ ERROR at teardown of test_replica_consumer __________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
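Every teardown error in this run has the same two-stage shape: the autouse fixture's pg_manager.restart() first fails inside clear() because the ClickHouse server at 172.16.4.3:9000 is no longer accepting connections, and the recovery path the traceback implies (restart() falling through to prepare()) then fails again for the same reason, which is why pytest prints two tracebacks joined by "During handling of the above exception, another exception occurred". A minimal sketch of that pattern, with invented stand-in names (DeadServerClient, Manager) rather than the actual repo helpers:

    import pytest

    class DeadServerClient:
        """Stand-in for the test client: the server is down, so every query fails."""
        def query(self, sql: str) -> str:
            raise ConnectionRefusedError("Connection refused (172.16.4.3:9000)")

    class Manager:
        """Sketch of the restart()/clear()/prepare() structure the traceback implies."""
        def __init__(self) -> None:
            self.client = DeadServerClient()

        def clear(self) -> None:
            self.client.query("DROP DATABASE IF EXISTS `test_database` SYNC")

        def prepare(self) -> None:
            self.client.query('DROP DATABASE IF EXISTS "postgres_database"')

        def restart(self) -> None:
            try:
                self.clear()    # first failure: connection refused
            except ConnectionRefusedError:
                self.prepare()  # raised while handling the first -> chained report

    @pytest.fixture(autouse=True)
    def setup_teardown():
        yield  # run test
        Manager().restart()  # teardown hits the dead server twice

Once the server process is gone, every later test in the module reports the same pair of errors at teardown, so the root cause is likely whatever took the server down, not the individual tests that follow.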
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_replica_consumer" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:09 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance2 (cluster.py:3602, query)
2026-04-30 17:33:10 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance2 (cluster.py:3602, query)
2026-04-30 17:33:12 [ 413 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance2 (cluster.py:3602, query)
2026-04-30 17:33:16 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database.test_replica_consumer SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:33:17 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:19 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
____________ ERROR at teardown of test_symbols_in_publication_name _____________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_symbols_in_publication_name" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:21 [ 413 ] DEBUG : Executing query INSERT INTO `postgres-postgres`.`test_symbols_in_publication_name` SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:33:22 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:24 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
___________________ ERROR at teardown of test_table_override ___________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "table_override" ( key Integer NOT NULL, value Text, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:27 [ 413 ] DEBUG : Executing query insert into postgres_database.table_override select number, 'test' from numbers(10) on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:33:29 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:31 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
_______________________ ERROR at teardown of test_toast ________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE "test_toast" (id integer PRIMARY KEY, txt text, other text)
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:32 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:33:33 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:33:35 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
___________________ ERROR at teardown of test_too_many_parts ___________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

self =

    def restart(self):
        try:
>           self.clear()

helpers/postgres_utility.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:169: in clear
    self.drop_materialized_db(db)
helpers/postgres_utility.py:263: in drop_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}` SYNC")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException

During handling of the above exception, another exception occurred:

    @pytest.fixture(autouse=True)
    def setup_teardown():
        print("PostgreSQL is available - running test")
        yield  # run test
>       pg_manager.restart()

test_postgresql_replica_database_engine_2/test.py:102:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:144: in restart
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
helpers/client.py:239: in get_answer
    raise QueryRuntimeException(
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_table" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table test_table exists in test_database
Checking table is synchronized: `test_database`.`test_table`
51
Checking table test_table exists in test_database
Checking table is synchronized: `test_database`.`test_table`
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:37 [ 413 ] DEBUG : Executing query INSERT INTO `postgres_database2`.`test_table` SELECT number, number from numbers(50) on instance2 (cluster.py:3602, query)
2026-04-30 17:33:40 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance2 (cluster.py:3602, query)
2026-04-30 17:33:41 [ 413 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.4.2:5432', 'postgres_database2', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'test_table', materialized_postgresql_backoff_min_ms = 100, materialized_postgresql_backoff_max_ms = 100 on instance2 (cluster.py:3602, query)
2026-04-30 17:33:45 [ 413 ] DEBUG : Executing query SHOW DATABASES on instance2 (cluster.py:3602, query)
2026-04-30 17:33:47 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_table' on instance2 (cluster.py:3602, query)
2026-04-30 17:33:52 [ 413 ] DEBUG : Executing query select * from `postgres_database2`.`test_table` order by key; on instance2 (cluster.py:3602, query)
2026-04-30 17:33:55 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_table` order by key; on instance2 (cluster.py:3602, query)
2026-04-30 17:34:00 [ 413 ] DEBUG : Executing query SELECT count() FROM test_database.test_table on instance2 (cluster.py:3602, query)
2026-04-30 17:34:04 [ 413 ] DEBUG : Executing query SYSTEM STOP MERGES on instance2 (cluster.py:3602, query)
2026-04-30 17:34:10 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database2.test_table SELECT 50, 50; on instance2 (cluster.py:3602, query)
2026-04-30 17:34:12 [ 413 ] DEBUG : Executing query SELECT count() FROM test_database.test_table on instance2 (cluster.py:3602, query)
2026-04-30 17:34:15 [ 413 ] DEBUG : Executing query SYSTEM FLUSH LOGS on instance2 (cluster.py:3602, query)
2026-04-30 17:35:07 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 detach:False nothrow:False cmd: ['bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "DB::Exception: Too many parts" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:2173, exec_in_container)
2026-04-30 17:35:07 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1', 'bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "DB::Exception: Too many parts" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check)
flush system log system.asynchronous_metric_log with 2776 entries up to offset 15397: Code: 252. DB::Exception: Too many parts (5 with average size of 5.65 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:16.704482 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 4 entries up to offset 38: Code: 252. DB::Exception: Too many parts (5 with average size of 1.93 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:18.125394 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 26 entries up to offset 607: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:25.664601 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 62 entries up to offset 669: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:27.866505 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 10 entries up to offset 57: Code: 252. DB::Exception: Too many parts (5 with average size of 133.94 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:33.275189 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 43 entries up to offset 712: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:53.173523 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2778 entries up to offset 28599: Code: 252. DB::Exception: Too many parts (5 with average size of 7.43 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:55.082705 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 9: Code: 252. DB::Exception: Too many parts (5 with average size of 887.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:55.860269 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 5 entries up to offset 74: Code: 252. DB::Exception: Too many parts (5 with average size of 2.14 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:03.844450 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2447 entries up to offset 31046: Code: 252. DB::Exception: Too many parts (5 with average size of 7.43 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:12.109000 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 47 entries up to offset 936: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:19.783833 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 958: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:27.495025 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 25 entries up to offset 983: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:36.040834 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 15: Code: 252. DB::Exception: Too many parts (5 with average size of 894.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:36.084717 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 27 entries up to offset 1010: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:39.118427 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 17 entries up to offset 123: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:05.401284 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 15 entries up to offset 138: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:08.329386 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3143 entries up to offset 47108: Code: 252. DB::Exception: Too many parts (5 with average size of 9.45 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:15.714101 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 8 entries up to offset 90: Code: 252. DB::Exception: Too many parts (5 with average size of 2.15 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:22.927755 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 44 entries up to offset 1209: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:26.925677 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 25 entries up to offset 163: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:30.774556 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 29 entries up to offset 1238: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:36.996955 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 23: Code: 252. DB::Exception: Too many parts (5 with average size of 903.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:38.342579 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 1258: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:45.750989 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 22 entries up to offset 185: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:46.532823 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 1300: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:54.214636 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 1333: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:01.766611 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 35 entries up to offset 1368: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:25.394727 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3139 entries up to offset 76081: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:35.329032 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 80269: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:43.457836 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3486 entries up to offset 83755: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:46.516659 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 26 entries up to offset 1522: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:52.411083 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 86547: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:54.911672 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 1564: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:02.843039 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 32 entries up to offset 1596: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:04.928663 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 36: Code: 252. DB::Exception: Too many parts (5 with average size of 904.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:10.478028 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 41 entries up to offset 1637: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:18.174195 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 27 entries up to offset 1664: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:19.733754 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 286: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:26.163869 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 28 entries up to offset 1692: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:31.314800 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 12 entries up to offset 298: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:41.570123 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3480 entries up to offset 102238: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:43.396083 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 11 entries up to offset 309: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:44.210963 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 44: Code: 252. DB::Exception: Too many parts (5 with average size of 906.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:49.653960 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 106426: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:56.934646 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2788 entries up to offset 109214: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:06.107435 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2439 entries up to offset 111653: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:07.449464 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 25 entries up to offset 1900: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:15.112080 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 1922: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:15.437192 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3492 entries up to offset 115145: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:24.510633 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 1944: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:25.515514 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 117937: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:26.529231 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 52: Code: 252. DB::Exception: Too many parts (5 with average size of 908.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:32.088351 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 51 entries up to offset 1995: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:39.724104 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 2028: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:47.298108 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 30 entries up to offset 2058: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:57.636716 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 18 entries up to offset 385: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:09.844719 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 133283: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:11.419517 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 11 entries up to offset 396: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:17.849156 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3145 entries up to offset 136428: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:22.677027 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 409: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:25.678253 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2788 entries up to offset 139216: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:27.465445 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 37 entries up to offset 2230: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:35.351059 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 40 entries up to offset 2270: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:37.931833 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 142008: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:42.924636 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 18 entries up to offset 2288: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:43.466749 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 12 entries up to offset 421: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:50.792250 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 66: Code: 252. DB::Exception: Too many parts (5 with average size of 919.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:51.066821 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4192 entries up to offset 146200: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:51.508609 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 24 entries up to offset 2312: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:57.194309 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 21 entries up to offset 442: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:59.841243 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 45 entries up to offset 2357: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:05.788869 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 150388: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:08.390498 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 2390: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:10.706694 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 455: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:16.233855 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 29 entries up to offset 2419: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:16.889436 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4884 entries up to offset 155272: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:23.829885 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 28 entries up to offset 2447: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:05.209874 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 41 entries up to offset 2620: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:06.882796 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3839 entries up to offset 173420: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:12.815502 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 2640: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:16.473252 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3143 entries up to offset 176563: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:20.525712 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 23 entries up to offset 2663: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:25.228793 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3488 entries up to offset 180051: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:28.172597 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 30 entries up to offset 2693: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:37.369955 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2092 entries up to offset 182143: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:37.732533 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 2713: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:42.062105 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 15 entries up to offset 543: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:45.578433 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 18 entries up to offset 2731: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:49.275693 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4186 entries up to offset 186329: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:54.413019 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 46 entries up to offset 2777: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:00.100791 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 17 entries up to offset 560: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:02.480434 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 35 entries up to offset 2812: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:13.208706 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 36 entries up to offset 2848: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:16.766539 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3839 entries up to offset 190168: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:24.029388 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 18 entries up to offset 578: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:28.515839 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 92: Code: 252. DB::Exception: Too many parts (5 with average size of 930.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:07.356617 [ 648 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 4 entries up to offset 142: Code: 252. DB::Exception: Too many parts (5 with average size of 2.46 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:10.721241 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 24 entries up to offset 602: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:15.968027 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 41 entries up to offset 3043: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:24.772383 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 59 entries up to offset 3102: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:32.549689 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 21 entries up to offset 3123: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:32.068158 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 97: Code: 252. DB::Exception: Too many parts (5 with average size of 920.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:39.946712 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 47 entries up to offset 649: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:40.652338 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 3165: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:48.520516 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 28 entries up to offset 3193: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:56.506672 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 26 entries up to offset 3219: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:05.016027 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 24 entries up to offset 3243: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:13.839135 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 3276: Code: 252. DB::Exception: Too many parts (5 with average size of 20.60 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:25.649670 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 5584 entries up to offset 237638: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:34.717171 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 240779: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:44.081237 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 243571: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:56.940765 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 23 entries up to offset 3432: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:12 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:31:58.854365 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3139 entries up to offset 246710: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:05.572298 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 3474: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:07.697998 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4537 entries up to offset 251247: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:13.999654 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 29 entries up to offset 3503: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:19.575280 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 254388: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:23.856533 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 18 entries up to offset 3521: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:28.434975 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 257529: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:36.186018 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 27 entries up to offset 3548: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:43.099778 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 117: Code: 252. DB::Exception: Too many parts (5 with average size of 937.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:43.503121 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 29 entries up to offset 781: Code: 252. DB::Exception: Too many parts (5 with average size of 143.66 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:44.746366 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 48 entries up to offset 3596: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:48.956011 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 260670: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:51.436152 [ 648 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 2 entries up to offset 152: Code: 252. DB::Exception: Too many parts (5 with average size of 2.48 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:32:52.868830 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 46 entries up to offset 3642: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:33:05.207330 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 3662: Code: 252. DB::Exception: Too many parts (5 with average size of 22.26 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:33:20.633502 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 6631 entries up to offset 267301: Code: 252. DB::Exception: Too many parts (5 with average size of 25.24 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:33:22.702385 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 21 entries up to offset 802: Code: 252. DB::Exception: Too many parts (5 with average size of 143.66 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:10.390000 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 39 entries up to offset 841: Code: 252. DB::Exception: Too many parts (5 with average size of 143.66 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:15.502563 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 264 entries up to offset 4218: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:22.061016 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 84 entries up to offset 4302: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:26.367533 [ 645 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::QueryLogElement]: Failed to flush system log system.query_log with 6 entries up to offset 36: Code: 252. DB::Exception: Too many parts (5 with average size of 8.27 KiB) in table 'system.query_log (cc93b3c9-11d2-43a7-8f85-cb124f4e3429)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:31.985354 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 139 entries up to offset 4441: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:40.988579 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 64 entries up to offset 4505: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:43.860981 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 46 entries up to offset 887: Code: 252. DB::Exception: Too many parts (5 with average size of 143.66 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:50.817053 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 75 entries up to offset 4580: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:58.987453 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 132: Code: 252. DB::Exception: Too many parts (5 with average size of 941.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:59.559897 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 38 entries up to offset 4618: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:35:06.522874 [ 651 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 3 entries up to offset 135: Code: 252. DB::Exception: Too many parts (5 with average size of 941.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:35:07.140336 [ 650 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 78 entries up to offset 4696: Code: 252. DB::Exception: Too many parts (5 with average size of 26.23 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:35:07.399342 [ 652 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 35 entries up to offset 922: Code: 252. DB::Exception: Too many parts (5 with average size of 143.66 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:35:12.745007 [ 645 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::QueryLogElement]: Failed to flush system log system.query_log with 2 entries up to offset 38: Code: 252. DB::Exception: Too many parts (5 with average size of 8.27 KiB) in table 'system.query_log (cc93b3c9-11d2-43a7-8f85-cb124f4e3429)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:13 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 detach:False nothrow:False cmd: ['bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "DB::Exception: Too many parts" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:2173, exec_in_container)
2026-04-30 17:35:13 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1', 'bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "DB::Exception: Too many parts" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:14.952237 [ 654 ] {} void DB::SystemLog<LogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2776 entries up to offset 15397: Code: 252. DB::Exception: Too many parts (5 with average size of 5.65 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
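[Editorial aside] The two DEBUG lines above show how cluster.py collects these reports at teardown: for each node it runs a bash one-liner inside the container via docker exec, zgrep-ing the server log for "DB::Exception: Too many parts". A minimal standalone sketch of that sweep in Python follows; it is not part of cluster.py, it assumes Docker is reachable on the host, it reuses the container name visible in this log, and it drops the `| ( [ -z "" ] && cat || grep -v "$" )` stage, which here is a no-op placeholder for an exclusion filter.

import subprocess

# Sketch only: replicate the harness's "Too many parts" sweep for one container.
CONTAINER = "roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1"
LOG = "/var/log/clickhouse-server/clickhouse-server.log"

# Tolerate a missing log file ([ -f ... ] &&) and never fail the run (|| true),
# mirroring the command logged above.
cmd = f'[ -f {LOG} ] && zgrep -aH "DB::Exception: Too many parts" {LOG} || true'
result = subprocess.run(
    ["docker", "exec", CONTAINER, "bash", "-c", cmd],
    capture_output=True,
    text=True,
)
for line in result.stdout.splitlines():
    print(line)  # one line per TOO_MANY_PARTS occurrence, as in the Stdout: records above

Because of the `|| true`, the sweep is purely diagnostic and cannot itself fail teardown; inside the container, the same condition can also be inspected directly with clickhouse-client by counting active rows per table in system.parts.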
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:16.704482 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 4 entries up to offset 38: Code: 252. DB::Exception: Too many parts (5 with average size of 1.93 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:18.125394 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 26 entries up to offset 607: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:25.664601 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 62 entries up to offset 669: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:27.866505 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 10 entries up to offset 57: Code: 252. DB::Exception: Too many parts (5 with average size of 133.94 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:33.275189 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 43 entries up to offset 712: Code: 252. DB::Exception: Too many parts (5 with average size of 9.58 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:53.173523 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2778 entries up to offset 28599: Code: 252. DB::Exception: Too many parts (5 with average size of 7.43 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:55.082705 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 9: Code: 252. DB::Exception: Too many parts (5 with average size of 887.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:20:55.860269 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 5 entries up to offset 74: Code: 252. DB::Exception: Too many parts (5 with average size of 2.14 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:03.844450 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2447 entries up to offset 31046: Code: 252. DB::Exception: Too many parts (5 with average size of 7.43 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:12.109000 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 47 entries up to offset 936: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:19.783833 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 958: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:27.495025 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 25 entries up to offset 983: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:36.040834 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 15: Code: 252. DB::Exception: Too many parts (5 with average size of 894.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:36.084717 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 27 entries up to offset 1010: Code: 252. DB::Exception: Too many parts (5 with average size of 13.01 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:21:39.118427 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 17 entries up to offset 123: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:05.401284 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 15 entries up to offset 138: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:08.329386 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3143 entries up to offset 47108: Code: 252. DB::Exception: Too many parts (5 with average size of 9.45 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:15.714101 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 8 entries up to offset 90: Code: 252. DB::Exception: Too many parts (5 with average size of 2.15 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:22.927755 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 44 entries up to offset 1209: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:26.925677 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 25 entries up to offset 163: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:30.774556 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 29 entries up to offset 1238: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:36.996955 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 23: Code: 252. DB::Exception: Too many parts (5 with average size of 903.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:38.342579 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 1258: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:45.750989 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 22 entries up to offset 185: Code: 252. DB::Exception: Too many parts (5 with average size of 135.70 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:46.532823 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 1300: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:22:54.214636 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 1333: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:01.766611 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 35 entries up to offset 1368: Code: 252. DB::Exception: Too many parts (5 with average size of 14.41 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:25.394727 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3139 entries up to offset 76081: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:35.329032 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 80269: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:43.457836 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3486 entries up to offset 83755: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:46.516659 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 26 entries up to offset 1522: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:52.411083 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 86547: Code: 252. DB::Exception: Too many parts (5 with average size of 13.62 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:23:54.911672 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 42 entries up to offset 1564: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:02.843039 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 32 entries up to offset 1596: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:04.928663 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 36: Code: 252. DB::Exception: Too many parts (5 with average size of 904.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:10.478028 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 41 entries up to offset 1637: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:18.174195 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 27 entries up to offset 1664: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:19.733754 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 286: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:26.163869 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 28 entries up to offset 1692: Code: 252. DB::Exception: Too many parts (5 with average size of 15.46 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:31.314800 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 12 entries up to offset 298: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:41.570123 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3480 entries up to offset 102238: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:43.396083 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 11 entries up to offset 309: Code: 252. DB::Exception: Too many parts (5 with average size of 138.52 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:44.210963 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 44: Code: 252. DB::Exception: Too many parts (5 with average size of 906.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:49.653960 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 106426: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:24:56.934646 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2788 entries up to offset 109214: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:06.107435 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2439 entries up to offset 111653: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:07.449464 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 25 entries up to offset 1900: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:15.112080 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 1922: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:15.437192 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3492 entries up to offset 115145: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:24.510633 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 22 entries up to offset 1944: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:25.515514 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 117937: Code: 252. DB::Exception: Too many parts (5 with average size of 15.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:26.529231 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 52: Code: 252. DB::Exception: Too many parts (5 with average size of 908.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:32.088351 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 51 entries up to offset 1995: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:39.724104 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 2028: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:47.298108 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 30 entries up to offset 2058: Code: 252. DB::Exception: Too many parts (5 with average size of 17.83 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:25:57.636716 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 18 entries up to offset 385: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:09.844719 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3141 entries up to offset 133283: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:11.419517 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 11 entries up to offset 396: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:17.849156 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3145 entries up to offset 136428: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:22.677027 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 409: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:25.678253 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2788 entries up to offset 139216: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:27.465445 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 37 entries up to offset 2230: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:35.351059 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 40 entries up to offset 2270: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:37.931833 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 2792 entries up to offset 142008: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:42.924636 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 18 entries up to offset 2288: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:43.466749 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 12 entries up to offset 421: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:50.792250 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 1 entries up to offset 66: Code: 252. DB::Exception: Too many parts (5 with average size of 919.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:51.066821 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4192 entries up to offset 146200: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:51.508609 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 24 entries up to offset 2312: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:57.194309 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 21 entries up to offset 442: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:26:59.841243 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 45 entries up to offset 2357: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:05.788869 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4188 entries up to offset 150388: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:08.390498 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 33 entries up to offset 2390: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:10.706694 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 13 entries up to offset 455: Code: 252. DB::Exception: Too many parts (5 with average size of 140.07 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:16.233855 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 29 entries up to offset 2419: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:16.889436 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 4884 entries up to offset 155272: Code: 252. DB::Exception: Too many parts (5 with average size of 17.06 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:27:23.829885 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 28 entries up to offset 2447: Code: 252. DB::Exception: Too many parts (5 with average size of 18.84 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:05.209874 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 41 entries up to offset 2620: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:06.882796 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3839 entries up to offset 173420: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check) 2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:12.815502 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 20 entries up to offset 2640: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. 
(TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:16.473252 [ 654 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::AsynchronousMetricLogElement]: Failed to flush system log system.asynchronous_metric_log with 3143 entries up to offset 176563: Code: 252. DB::Exception: Too many parts (5 with average size of 18.68 KiB) in table 'system.asynchronous_metric_log (3bfff98c-aaa1-471b-90fb-920a96db951c)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:20.525712 [ 650 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TextLogElement]: Failed to flush system log system.text_log with 23 entries up to offset 2663: Code: 252. DB::Exception: Too many parts (5 with average size of 18.66 KiB) in table 'system.text_log (e80cd2df-122e-4dec-981c-52a37e3ba972)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:28:42.062105 [ 652 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::MetricLogElement]: Failed to flush system log system.metric_log with 15 entries up to offset 543: Code: 252. DB::Exception: Too many parts (5 with average size of 141.73 KiB) in table 'system.metric_log (437a1911-d05a-42f1-bcda-958599e54e1f)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:29:28.515839 [ 651 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::ErrorLogElement]: Failed to flush system log system.error_log with 2 entries up to offset 92: Code: 252. DB::Exception: Too many parts (5 with average size of 930.00 B) in table 'system.error_log (900c6821-cdb2-4509-9986-a307723e4562)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:30:07.356617 [ 648 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::TraceLogElement]: Failed to flush system log system.trace_log with 4 entries up to offset 142: Code: 252. DB::Exception: Too many parts (5 with average size of 2.46 KiB) in table 'system.trace_log (99d5a4fd-4c62-4ca9-a356-846e01e43840)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
2026-04-30 17:35:14 [ 413 ] DEBUG : Stdout:/var/log/clickhouse-server/clickhouse-server.log:2026.04.30 17:34:26.367533 [ 645 ] {} void DB::SystemLog::flushImpl(const std::vector &, uint64_t) [LogElement = DB::QueryLogElement]: Failed to flush system log system.query_log with 6 entries up to offset 36: Code: 252. DB::Exception: Too many parts (5 with average size of 8.27 KiB) in table 'system.query_log (cc93b3c9-11d2-43a7-8f85-cb124f4e3429)'. Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below): (cluster.py:121, run_and_check)
[... several dozen further near-identical TOO_MANY_PARTS flush failures for the same six system tables (system.asynchronous_metric_log, system.text_log, system.metric_log, system.error_log, system.trace_log, system.query_log), logged between 17:28:16 and 17:35:12, omitted ...]
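Note on the block above: TOO_MANY_PARTS (code 252) is MergeTree's insert back-pressure guard. Every system-log flush writes a new data part, and once an active partition holds more unmerged parts than parts_to_throw_insert allows, further inserts are rejected until merges catch up; here the guard trips at only 5 parts and merges appear to have been stopped by the test (note the SYSTEM START MERGES issued just below), so every flush fails. A minimal diagnostic sketch in the style of this run's helpers, assuming `node` is a helpers.cluster ClickHouseInstance; the function names are illustrative, not from the suite, and only standard system tables and statements are used:

    # Sketch only: inspect part pressure on the system log tables that
    # failed to flush above.
    def dump_part_pressure(node):
        return node.query(
            "SELECT table, count() AS active_parts, "
            "formatReadableSize(sum(bytes_on_disk)) AS on_disk "
            "FROM system.parts "
            "WHERE database = 'system' AND active "
            "GROUP BY table ORDER BY active_parts DESC"
        )

    # Illustrative mitigation, mirroring what the run does below: re-enable
    # merges so the backlog of small parts is compacted and flushes resume.
    def unblock_system_logs(node):
        node.query("SYSTEM START MERGES")
        node.query("OPTIMIZE TABLE system.text_log FINAL")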
2026-04-30 17:35:14 [ 413 ] DEBUG : Executing query SELECT count() FROM test_database.test_table on instance2 (cluster.py:3602, query)
2026-04-30 17:35:17 [ 413 ] DEBUG : Executing query SYSTEM START MERGES on instance2 (cluster.py:3602, query)
2026-04-30 17:35:20 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_table' on instance2 (cluster.py:3602, query)
2026-04-30 17:35:27 [ 413 ] DEBUG : Executing query select * from `postgres_database2`.`test_table` order by key; on instance2 (cluster.py:3602, query)
2026-04-30 17:35:29 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_table` order by key; on instance2 (cluster.py:3602, query)
2026-04-30 17:35:32 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance2 (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:35:38 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3602, query)
2026-04-30 17:35:42 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3602, query)
2026-04-30 17:35:45 [ 413 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine2_gw2', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance2/docker-compose.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_postgres1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_postgres1_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check)
2026-04-30 17:35:53 [ 413 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check)
2026-04-30 17:35:54 [ 413 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine2_gw2', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_2/_instances_0_gw2/instance2/docker-compose.yml', 'down', '--volumes'] (cluster.py:113, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_postgres1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_instance2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine2_gw2_postgres1_1 ... done (cluster.py:123, run_and_check)
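Aside: the Command:/Stdout:/Stderr: triples that structure this whole log come from the framework's run_and_check wrapper in helpers/cluster.py (the cluster.py:113/121/123 call sites above). A rough sketch of that logging shape, assuming plain subprocess semantics; this is an illustration of the pattern, not the actual helper:

    import logging
    import subprocess

    # Sketch of the Command:/Stdout:/Stderr: pattern seen throughout this
    # log. The real helper's error handling and shell quoting are more
    # involved; the timeout default here is assumed.
    def run_and_check(args, timeout=300):
        logging.debug("Command:%s", args)
        proc = subprocess.run(args, capture_output=True, timeout=timeout)
        for line in proc.stdout.decode(errors="replace").splitlines():
            logging.debug("Stdout:%s", line)
        for line in proc.stderr.decode(errors="replace").splitlines():
            logging.debug("Stderr:%s", line)
        if proc.returncode != 0:
            raise RuntimeError(f"Command {args} returned {proc.returncode}")
        return proc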
2026-04-30 17:35:58 [ 413 ] DEBUG : Stderr:Removing network roottestpostgresqlreplicadatabaseengine2_gw2_default (cluster.py:123, run_and_check)
2026-04-30 17:35:58 [ 413 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2026-04-30 17:35:58 [ 413 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine2_gw2 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:35:58 [ 413 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine2_gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:35:59 [ 413 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine2_gw2 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:35:59 [ 413 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine2_gw2_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2026-04-30 17:35:59 [ 413 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2026-04-30 17:35:59 [ 413 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine2_gw2 (cluster.py:904, cleanup)
2026-04-30 17:35:59 [ 413 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2026-04-30 17:36:00 [ 413 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
2026-04-30 17:36:00 [ 413 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2026-04-30 17:36:00 [ 413 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2026-04-30 17:36:00 [ 413 ] DEBUG : Images pruned (cluster.py:929, cleanup)
2026-04-30 17:36:00 [ 413 ] DEBUG : Trying to prune unused volumes... (cluster.py:935, cleanup)
2026-04-30 17:36:00 [ 413 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:36:00 [ 413 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check)
=================================== FAILURES ===================================
___________________________ test_rename_distributed ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_rename_distributed(started_cluster):
        table_name = "test_rename_distributed"
        try:
            create_distributed_table(node1, table_name)
            insert(node1, table_name, 1000)
            rename_column_on_cluster(node1, table_name, "num2", "foo2")
>           rename_column_on_cluster(node1, "%s_replicated" % table_name, "num2", "foo2")

test_rename_column/test.py:712:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_rename_column/test.py:255: in rename_column_on_cluster
    node.query(
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 10, stderr: Received exception from server (version 24.8.14):
E       Code: 10. DB::Exception: Received from 172.16.8.6:9000. DB::Exception: There was an error on [node2:9000]: Code: 10. DB::Exception: Wrong column name. Cannot find column `num2` to rename. Maybe you meant: ['num']. (NOT_FOUND_COLUMN_IN_BLOCK) (version 24.8.14.10545.altinitytest (altinity build)). Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E       1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E       2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E       3. ./src/Common/Exception.h:0: DB::DDLQueryStatusSource::generate() @ 0x0000000029b6cb13
E       4. ./src/Processors/Chunk.h:110: DB::ISource::tryGenerate() @ 0x000000002d398878
E       5. ./build_docker/./src/Processors/ISource.cpp:0: DB::ISource::work() @ 0x000000002d397d01
E       6. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:0: DB::ExecutionThreadContext::executeTask() @ 0x000000002d3d1c4e
E       7. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic*) @ 0x000000002d3b8a31
E       8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::PipelineExecutor::executeImpl(unsigned long, bool) @ 0x000000002d3b73dc
E       9. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:274: DB::PipelineExecutor::execute(unsigned long, bool) @ 0x000000002d3b6edb
E       10. ./build_docker/./src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:94: void std::__function::__policy_invoker::__call_impl::ThreadFromGlobalPoolImpl(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000002d3da957
E       11. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000001af7fcb2
E       12. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__thread_proxy[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001af8c0b5
E       13. asan_thread_start(void*) @ 0x000000000aa49059
E       14. ? @ 0x00007ffa1cc6cac3
E       15. ? @ 0x00007ffa1ccfe850
E       . (NOT_FOUND_COLUMN_IN_BLOCK)
E       (query: ALTER TABLE test_rename_distributed_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2)

helpers/client.py:239: QueryRuntimeException
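The failure itself is the NOT_FOUND_COLUMN_IN_BLOCK above: by the time the ON CLUSTER rename reached node2, its copy of test_rename_distributed_replicated only had `num`, so there was no `num2` left to rename; a retried or partially applied distributed DDL is one classic way to end up here. A defensive variant of the test's rename helper could probe system.columns first. This rename_column_if_present is a hypothetical sketch, not part of test_rename_column/test.py, and the default database is an assumption:

    # Hypothetical guard for flaky ON CLUSTER renames: skip when the source
    # column is already gone, instead of failing with
    # NOT_FOUND_COLUMN_IN_BLOCK as in the traceback above.
    def rename_column_if_present(node, table, old_name, new_name, database="default"):
        present = node.query(
            "SELECT count() FROM system.columns "
            f"WHERE database = '{database}' AND table = '{table}' "
            f"AND name = '{old_name}'"
        ).strip()
        if present != "0":
            node.query(
                f"ALTER TABLE {database}.{table} RENAME COLUMN {old_name} TO {new_name}"
            )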
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2026-04-30 17:18:25 [ 410 ] DEBUG : Command:['docker ps | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:18:25 [ 410 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check)
2026-04-30 17:18:25 [ 410 ] DEBUG : No running containers (conftest.py:95, cleanup_environment)
2026-04-30 17:18:25 [ 410 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment)
2026-04-30 17:18:25 [ 410 ] DEBUG : Command:['docker network prune --force'] (cluster.py:113, run_and_check)
2026-04-30 17:18:25 [ 410 ] DEBUG : Command:["sysctl net.ipv4.ip_local_port_range='55000 65535'"] (cluster.py:113, run_and_check)
2026-04-30 17:18:25 [ 410 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:121, run_and_check)
2026-04-30 17:18:25 [ 410 ] INFO : Running tests in /ClickHouse/tests/integration/test_rename_column/test.py (cluster.py:2788, start)
2026-04-30 17:18:25 [ 410 ] DEBUG : Cluster start called. is_up=False (cluster.py:2795, start)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker networks for project roottestrenamecolumn_gw1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker containers for project roottestrenamecolumn_gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker volumes for project roottestrenamecolumn_gw1 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker networks for project roottestrenamecolumn_gw1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker containers for project roottestrenamecolumn_gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Docker volumes for project roottestrenamecolumn_gw1 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:18:25 [ 410 ] DEBUG : Command:docker container list --all --filter name='^/roottestrenamecolumn_gw1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2026-04-30 17:18:26 [ 410 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : No running containers for project: roottestrenamecolumn_gw1 (cluster.py:904, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2026-04-30 17:18:26 [ 410 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2026-04-30 17:18:26 [ 410 ] DEBUG : Images pruned (cluster.py:929, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : Trying to prune unused volumes... (cluster.py:935, cleanup)
2026-04-30 17:18:26 [ 410 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:18:26 [ 410 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check)
2026-04-30 17:18:26 [ 410 ] DEBUG : Setup directory for instance: node1 (cluster.py:2808, start)
2026-04-30 17:18:26 [ 410 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_rename_column/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_rename_column/configs/config.d/instant_moves.xml', '/ClickHouse/tests/integration/test_rename_column/configs/config.d/part_log.xml', '/ClickHouse/tests/integration/test_rename_column/configs/config.d/zookeeper_session_timeout.xml', '/ClickHouse/tests/integration/test_rename_column/configs/config.d/storage_configuration.xml'] to /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/database (cluster.py:4649, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/logs (cluster.py:4660, create_dir)
2026-04-30 17:18:26 [ 410 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log"] (cluster.py:4746, create_dir)
[... identical "Setup directory for instance" blocks for node2, node3 and node4 follow, differing only in the instance name within each path ...]
DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log"] (cluster.py:4746, create_dir) 2026-04-30 17:18:26 [ 410 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:1e0b53d756cf', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env (cluster.py:86, _create_env_file) 2026-04-30 17:18:26 [ 410 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2026-04-30 17:18:26 [ 410 ] DEBUG : No config file found (config.py:28, find_config_file) 2026-04-30 17:18:26 [ 410 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2026-04-30 17:18:26 [ 410 ] DEBUG : No config file found (config.py:28, find_config_file) 2026-04-30 17:18:26 [ 410 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 824 (connectionpool.py:547, _make_request) 2026-04-30 17:18:26 [ 410 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', '--project-name', 'roottestrenamecolumn_gw1', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node3/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node4/docker-compose.yml', 'pull'] (cluster.py:113, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo2 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo1 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo3 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node3 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node2 ... 
2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node4 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node1 ... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node3 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node4 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node2 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo2 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node3 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node3 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo2 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node2 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node3 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo1 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node4 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node4 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node4 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo3 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node1 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling node1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo3 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo3 ... status: image is up to date for a...
(cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Stderr:Pulling zoo3 ... done (cluster.py:123, run_and_check) 2026-04-30 17:19:53 [ 410 ] DEBUG : Setup ZooKeeper (cluster.py:2849, start) 2026-04-30 17:19:53 [ 410 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/config', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/log', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/config', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/coordination', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/log', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/config', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/coordination'] (cluster.py:2850, start) 2026-04-30 17:19:53 [ 410 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', '--project-name', 'roottestrenamecolumn_gw1', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.25', 'Details': {'GitCommit': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}}, {'Name': 'runc', 'Version': '1.2.4', 'Details': {'GitCommit': 'v1.2.4-0-g6c52b3f'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestrenamecolumngw1_default') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, 
run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'ID': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Containers': 11, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestrenamecolumn_gw1_default') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestrenamecolumn_gw1_default" with the default driver (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestrenamecolumn_gw1_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestrenamecolumn_gw1', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': 'c47d3cd7cf2890821908d4341586cde7d0d6263109a5feff16a413fd11f154bb', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) 
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG 
: Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : 
Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {<Service: zoo1>, <Service: zoo2>, <Service: zoo3>} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for <Service: zoo3> (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for <Service: zoo1> (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for <Service: zoo2> (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumn_gw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo3_1 ... (cluster.py:123, run_and_check)
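Note the lookups alternating between com.docker.compose.project=roottestrenamecolumn_gw1 and roottestrenamecolumngw1: docker-compose 1.29 appears to query both the current and the legacy separator-free spelling of the project label before concluding that no service containers exist yet. A hedged docker-py sketch of that label-filtered lookup (the project names are the ones from this run):

import docker

client = docker.from_env()
for project in ("roottestrenamecolumn_gw1", "roottestrenamecolumngw1"):
    # list all containers carrying this compose project label
    containers = client.containers.list(
        all=True,
        filters={"label": [f"com.docker.compose.project={project}",
                           "com.docker.compose.oneoff=False"]},
    )
    print(project, "->", len(containers), "containers")

Every query returning an empty list is what triggers the Creating ... lines for zoo1, zoo2 and zoo3.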
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestrenamecolumn_gw1', service='zoo3', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestrenamecolumn_gw1', service='zoo3', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestrenamecolumngw1', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestrenamecolumn_gw1', service='zoo2', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestrenamecolumn_gw1', service='zoo2', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:...
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo1_1 ... 
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestrenamecolumn_gw1', service='zoo1', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestrenamecolumn_gw1', service='zoo1', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 20e6dc743703ec516f41e21f2dfa9e0902f5cfc13dd242d089e82b90d5767de0 (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestrenamecolumn_gw1_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Target': '/var/log/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check)
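The create_host_config exchange above is docker-py's low-level API: compose turns each service volume into a bind-type Mount before the container is created. A short sketch under that assumption, with paths copied from the log and the mount list abridged (illustrative, not the compose source):

import docker
from docker.types import Mount

api = docker.APIClient()
host_config = api.create_host_config(
    network_mode="roottestrenamecolumn_gw1_default",
    restart_policy={"Name": "always", "MaximumRetryCount": 0},
    cap_add=["SYS_PTRACE", "NET_ADMIN", "IPC_LOCK", "SYS_NICE"],
    mounts=[
        # keeper log directory, bind-mounted from the test instance dir
        Mount(target="/var/log/clickhouse-keeper",
              source="/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log",
              type="bind"),
        # one clickhouse binary serves as both server and keeper
        Mount(target="/usr/bin/clickhouse", source="/clickhouse", type="bind"),
        Mount(target="/usr/bin/clickhouse-keeper", source="/clickhouse", type="bind"),
    ],
)

Mounting the single /clickhouse binary at both /usr/bin/clickhouse and /usr/bin/clickhouse-keeper is what lets the keeper entrypoint below run plain clickhouse keeper ....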
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestrenamecolumn_gw1_zoo1_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestrenamecolumn_gw1', 'com.docker.compose.service': 'zoo1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '20e6dc743703ec516f41e21f2dfa9e0902f5cfc13dd242d089e82b90d5767de0'}, host_config={'NetworkMode': 'roottestrenamecolumn_gw1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestrenamecolumn_gw1_default': {'Aliases': ['zoo1'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd':
['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 7ebd0736bf4a4cdba4940fbdd729a79c484b05ab8ac3453f84c567d7f42065c3 (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestrenamecolumn_gw1_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 
'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Source': '/clickhouse', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Target': '/usr/bin/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: c506871e3106d606d44c2e24beff9c25b211e59eedc32d45c39d0642db33cf8c (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestrenamecolumn_gw1_zoo2_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestrenamecolumn_gw1', 'com.docker.compose.service': 'zoo2', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '7ebd0736bf4a4cdba4940fbdd729a79c484b05ab8ac3453f84c567d7f42065c3'}, host_config={'NetworkMode': 'roottestrenamecolumn_gw1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 
'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestrenamecolumn_gw1_default': {'Aliases': ['zoo2'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestrenamecolumn_gw1_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] 
DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/log', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Target': '/var/log/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestrenamecolumn_gw1_zoo3_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestrenamecolumn_gw1', 'com.docker.compose.service': 'zoo3', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': 'c506871e3106d606d44c2e24beff9c25b211e59eedc32d45c39d0642db33cf8c'}, host_config={'NetworkMode': 'roottestrenamecolumn_gw1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper3/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestrenamecolumn_gw1_default': {'Aliases': ['zoo3'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
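Each create_container <- record maps onto docker-py's low-level call of the same name; a hedged sketch of the zoo3 request with the arguments abridged from the log (illustrative only, not the compose source):

import docker

api = docker.APIClient()
reply = api.create_container(
    image="altinityinfra/integration-test:1e0b53d756cf",
    name="roottestrenamecolumn_gw1_zoo3_1",
    user="0",
    detach=True,
    # keeper runs straight off the bind-mounted clickhouse binary
    entrypoint="clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml",
    host_config=api.create_host_config(network_mode="roottestrenamecolumn_gw1_default"),
)
print(reply["Id"])  # compose inspects and starts this id next

The create_container -> replies carrying the new container ids follow immediately below.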
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'fe7a8c3135640ca8df8333f0f49585d03e3c7892687a3db9cdc015cc869b40d7', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('fe7a8c3135640ca8df8333f0f49585d03e3c7892687a3db9cdc015cc869b40d7') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'f89c1f336f8ef2b73d52b451fde1aa9748b91158bf98adf66d0be6c907676e7c', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('f89c1f336f8ef2b73d52b451fde1aa9748b91158bf98adf66d0be6c907676e7c') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config1.xml', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:...
(cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('fe7a8c3135640ca8df8333f0f49585d03e3c7892687a3db9cdc015cc869b40d7', 'roottestrenamecolumn_gw1_default') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config2.xml', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('f89c1f336f8ef2b73d52b451fde1aa9748b91158bf98adf66d0be6c907676e7c', 'roottestrenamecolumn_gw1_default') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'd878ad8dfe43d92775e4e829529b2de5138c223b7f59b9515b808425e1d9776a', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('d878ad8dfe43d92775e4e829529b2de5138c223b7f59b9515b808425e1d9776a') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('fe7a8c3135640ca8df8333f0f49585d03e3c7892687a3db9cdc015cc869b40d7', 'roottestrenamecolumn_gw1_default', aliases=['zoo1', 'fe7a8c313564'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 
'--config=/etc/clickhouse-keeper/keeper_config3.xml', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('d878ad8dfe43d92775e4e829529b2de5138c223b7f59b9515b808425e1d9776a', 'roottestrenamecolumn_gw1_default') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('f89c1f336f8ef2b73d52b451fde1aa9748b91158bf98adf66d0be6c907676e7c', 'roottestrenamecolumn_gw1_default', aliases=['f89c1f336f8e', 'zoo2'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('fe7a8c3135640ca8df8333f0f49585d03e3c7892687a3db9cdc015cc869b40d7') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('d878ad8dfe43d92775e4e829529b2de5138c223b7f59b9515b808425e1d9776a', 'roottestrenamecolumn_gw1_default', aliases=['zoo3', 'd878ad8dfe43'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('f89c1f336f8ef2b73d52b451fde1aa9748b91158bf98adf66d0be6c907676e7c') (cluster.py:123, run_and_check) 2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: 
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('d878ad8dfe43d92775e4e829529b2de5138c223b7f59b9515b808425e1d9776a') (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestrenamecolumn_gw1', service='zoo1', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo1_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestrenamecolumn_gw1', service='zoo3', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo3_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestrenamecolumn_gw1', service='zoo2', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_zoo2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2026-04-30 17:20:03 [ 410 ] DEBUG : Wait ZooKeeper to start (cluster.py:2504, wait_zookeeper_to_start)
2026-04-30 17:20:03 [ 410 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:03 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_zoo1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:03 [ 410 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.8.2, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client)
2026-04-30 17:20:03 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:03 [ 410 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2026-04-30 17:20:03 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:03 [ 410 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2026-04-30 17:20:03 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:03 [ 410 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2026-04-30 17:20:03 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:03 [ 410 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2026-04-30 17:20:04 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:04 [ 410 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2026-04-30 17:20:06 [ 410 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:06 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_zoo2_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:06 [ 410 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.8.3, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client)
2026-04-30 17:20:06 [ 410 ] INFO : Connecting to 172.16.8.3(172.16.8.3):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2026-04-30 17:20:06 [ 410 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:06 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_zoo3_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:06 [ 410 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.8.4, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client)
2026-04-30 17:20:06 [ 410 ] INFO : Connecting to 172.16.8.4(172.16.8.4):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2026-04-30 17:20:06 [ 410 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2026-04-30 17:20:06 [ 410 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2026-04-30 17:20:06 [ 410 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop)
2026-04-30 17:20:06 [ 410 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2026-04-30 17:20:06 [ 410 ] DEBUG : All instances of ZooKeeper Secure started (cluster.py:2519, wait_zookeeper_nodes_to_start)
2026-04-30 17:20:06 [ 410 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env --project-name roottestrenamecolumn_gw1 --file /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node3/docker-compose.yml --file /ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node4/docker-compose.yml up -d --no-recreate') (cluster.py:3146, start)
2026-04-30 17:20:06 [ 410 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/.env', '--project-name', 'roottestrenamecolumn_gw1', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node3/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_rename_column/_instances_0_gw1/node4/docker-compose.yml', 'up', '-d', '--no-recreate'] (cluster.py:113, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node4_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node3_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node1_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node4_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : Stderr:Creating roottestrenamecolumn_gw1_node3_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:20:15 [ 410 ] DEBUG : ClickHouse instance created (cluster.py:3154, start)
2026-04-30 17:20:15 [ 410 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:15 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:15 [ 410 ] DEBUG : Waiting for ClickHouse start in node1, ip: 172.16.8.6... (cluster.py:3161, start)
2026-04-30 17:20:15 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:15 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/978c18f3f749fdde792c87149a481c3341b5ce702e643167d25b4a5761d2cfe7/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/978c18f3f749fdde792c87149a481c3341b5ce702e643167d25b4a5761d2cfe7/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : ClickHouse node1 started (cluster.py:3165, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : get_instance_ip instance_name=node2 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node2_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : Waiting for ClickHouse start in node2, ip: 172.16.8.5... (cluster.py:3161, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node2_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/000ff84be7f608f2d8e89cec55416eaf13b4ae6e506849a1aaaff6dad79fd2f6/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : ClickHouse node2 started (cluster.py:3165, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : get_instance_ip instance_name=node3 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node3_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : Waiting for ClickHouse start in node3, ip: 172.16.8.7... (cluster.py:3161, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/bc176eacbd4412f614a5f15207a458ea70cb71bd8ffd1c0735862925506273db/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : ClickHouse node3 started (cluster.py:3165, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : get_instance_ip instance_name=node4 (cluster.py:2135, get_instance_ip)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node4_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : Waiting for ClickHouse start in node4, ip: 172.16.8.8... (cluster.py:3161, start)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestrenamecolumn_gw1_node4_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:25 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : http://localhost:None "GET /v1.42/containers/e5b9848be7151dccfefcb41e09008da5defb56c56d262d82914b44fb7dc92dc5/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:20:26 [ 410 ] DEBUG : ClickHouse node4 started (cluster.py:3165, start)
------------------------------ Captured log call -------------------------------
2026-04-30 17:20:26 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_distributed_replicated ON CLUSTER test_cluster ( num UInt32, num2 UInt32 DEFAULT num + 1 ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/{shard}/test_rename_distributed_replicated', '{replica}') ORDER BY num PARTITION BY num % 100; on node1 (cluster.py:3602, query)
2026-04-30 17:20:28 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_distributed ON CLUSTER test_cluster AS test_rename_distributed_replicated ENGINE = Distributed(test_cluster, default, test_rename_distributed_replicated, rand()) on node1 (cluster.py:3602, query)
2026-04-30 17:20:30 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(1000) on node1 (cluster.py:3602, query)
2026-04-30 17:22:28 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:22:38 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:23:36 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:23:40 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_distributed ON CLUSTER test_cluster SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:24:18 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_distributed_replicated ON CLUSTER test_cluster SYNC on node1 (cluster.py:3602, query)
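The captured queries above exercise the ON CLUSTER rename path: a ReplicatedMergeTree table per shard, a Distributed table routing over it, an insert through the Distributed layer, then RENAME COLUMN at both levels. Condensed into harness-style Python (the wrapper function is illustrative; the SQL is taken from the "Executing query ..." entries, assuming a `node1` instance handle with the harness's `query()` method):

```python
# Illustrative wrapper; the statements mirror the captured log call above.
def rename_column_on_cluster(node1):
    node1.query(
        "CREATE TABLE test_rename_distributed_replicated ON CLUSTER test_cluster "
        "(num UInt32, num2 UInt32 DEFAULT num + 1) "
        "ENGINE = ReplicatedMergeTree("
        "'/clickhouse/tables/test/{shard}/test_rename_distributed_replicated', '{replica}') "
        "ORDER BY num PARTITION BY num % 100"
    )
    # The Distributed table is only a routing layer over the replicated shards.
    node1.query(
        "CREATE TABLE test_rename_distributed ON CLUSTER test_cluster "
        "AS test_rename_distributed_replicated "
        "ENGINE = Distributed(test_cluster, default, test_rename_distributed_replicated, rand())"
    )
    node1.query(
        "SET max_partitions_per_insert_block = 10000000; "
        "INSERT INTO test_rename_distributed (num, num2) "
        "SELECT number AS num, number + 1 AS num2 FROM numbers_mt(1000)"
    )
    # The column is renamed both on the Distributed wrapper and on the underlying
    # replicated table; the log shows both ALTERs running ON CLUSTER.
    node1.query(
        "ALTER TABLE test_rename_distributed ON CLUSTER test_cluster "
        "RENAME COLUMN num2 to foo2"
    )
    node1.query(
        "ALTER TABLE test_rename_distributed_replicated ON CLUSTER test_cluster "
        "RENAME COLUMN num2 to foo2"
    )
```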
_______________________ test_startup_with_small_bg_pool ________________________
[gw9] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_startup_with_small_bg_pool(started_cluster):
        start_clean_clickhouse()
        node.query("DROP TABLE IF EXISTS replicated_table SYNC")
        node.query(
            "CREATE TABLE replicated_table (k UInt64, i32 Int32) ENGINE=ReplicatedMergeTree('/clickhouse/replicated_table', 'r1') ORDER BY k"
        )
        node.query("INSERT INTO replicated_table VALUES(20, 30)")

        def assert_values():
            assert node.query("SELECT * FROM replicated_table") == "20\t30\n"

        assert_values()
>       node.restart_clickhouse(stop_start_wait_sec=10)

test_replicated_table_attach/test.py:55:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:4055: in restart_clickhouse
    self.start_clickhouse(stop_start_wait_sec)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
start_wait_sec = 10, retry_start = True, expected_to_fail = False

    def start_clickhouse(
        self, start_wait_sec=60, retry_start=True, expected_to_fail=False
    ):
        if not self.stay_alive:
            raise Exception(
                "ClickHouse can be started again only with stay_alive=True instance"
            )
        start_time = time.time()
        time_to_sleep = 0.5
        while start_time + start_wait_sec >= time.time():
            # sometimes after SIGKILL (hard reset) server may refuse to start for some time
            # for different reasons.
            pid = self.get_process_pid("clickhouse")
            if pid is None:
                logging.debug("No clickhouse process running. Start new one.")
                self.exec_in_container(
                    ["bash", "-c", "{} --daemon".format(self.clickhouse_start_command)],
                    user=str(os.getuid()),
                )
                if expected_to_fail:
                    self.wait_start_failed(start_wait_sec + start_time - time.time())
                    return
                time.sleep(1)
                continue
            else:
                logging.debug("Clickhouse process running.")
                if expected_to_fail:
                    raise Exception("ClickHouse was expected not to be running.")
                try:
                    self.wait_start(start_wait_sec + start_time - time.time())
                    return
                except Exception as e:
                    logging.warning(
                        f"Current start attempt failed. Will kill {pid} just in case."
                    )
                    self.exec_in_container(
                        ["bash", "-c", f"kill -9 {pid}"], user="root", nothrow=True
                    )
                    if not retry_start:
                        raise
                    time.sleep(time_to_sleep)

>       raise Exception("Cannot start ClickHouse, see additional info in logs")
E       Exception: Cannot start ClickHouse, see additional info in logs

helpers/cluster.py:3992: Exception
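The traceback shows the shape of `start_clickhouse`: launch the daemon if no process is found, wait for readiness, kill and retry on failure, and give up once the deadline passes. Stripped of the container plumbing, the retry pattern looks like the sketch below; `start_once` and `wait_started` are placeholders for the real exec/wait helpers, not cluster.py names.

```python
import logging
import time


def start_with_retries(start_once, wait_started, start_wait_sec=60.0):
    """Distilled retry loop; a sketch, not the actual cluster.py code."""
    start_time = time.time()
    while start_time + start_wait_sec >= time.time():
        start_once()  # launch the server daemon
        try:
            # Wait no longer than the time remaining until the overall deadline.
            wait_started(start_wait_sec + start_time - time.time())
            return
        except Exception:
            logging.warning("Current start attempt failed, retrying")
            time.sleep(0.5)
    raise Exception("Cannot start ClickHouse, see additional info in logs")
```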
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2026-04-30 17:24:37 [ 509 ] INFO : Running tests in /ClickHouse/tests/integration/test_replicated_table_attach/test.py (cluster.py:2788, start)
2026-04-30 17:24:37 [ 509 ] DEBUG : Cluster start called. is_up=False (cluster.py:2795, start)
2026-04-30 17:24:38 [ 509 ] DEBUG : Docker networks for project roottestreplicatedtableattach_gw9 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:24:38 [ 509 ] DEBUG : Docker containers for project roottestreplicatedtableattach_gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:24:38 [ 509 ] DEBUG : Docker volumes for project roottestreplicatedtableattach_gw9 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:24:38 [ 509 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2026-04-30 17:24:38 [ 509 ] DEBUG : Docker networks for project roottestreplicatedtableattach_gw9 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:24:38 [ 509 ] DEBUG : Docker containers for project roottestreplicatedtableattach_gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:24:39 [ 509 ] DEBUG : Docker volumes for project roottestreplicatedtableattach_gw9 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:24:39 [ 509 ] DEBUG : Command:docker container list --all --filter name='^/roottestreplicatedtableattach_gw9_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2026-04-30 17:24:39 [ 509 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2026-04-30 17:24:39 [ 509 ] DEBUG : No running containers for project: roottestreplicatedtableattach_gw9 (cluster.py:904, cleanup)
2026-04-30 17:24:39 [ 509 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2026-04-30 17:24:39 [ 509 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
2026-04-30 17:24:39 [ 509 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2026-04-30 17:24:39 [ 509 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2026-04-30 17:24:39 [ 509 ] DEBUG : Images pruned (cluster.py:929, cleanup)
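The cleanup pass above is plain CLI driving: list any leftover containers for the project, then prune unused images. A rough equivalent of those two logged commands (the `cleanup` wrapper itself is illustrative, not the cluster.py function):

```python
import subprocess


def cleanup(project_name: str) -> None:
    # "docker container list --all --filter name=... --format {{.ID}}:{{.Names}}"
    res = subprocess.run(
        ["docker", "container", "list", "--all",
         "--filter", f"name=^/{project_name}_.*_1$",
         "--format", "{{.ID}}:{{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    if not res.stdout.strip():
        print(f"No running containers for project: {project_name}")
    # "docker image prune -f" prints "Total reclaimed space: ..." as in the log.
    subprocess.run(["docker", "image", "prune", "-f"], check=True)
```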
2026-04-30 17:24:39 [ 509 ] DEBUG : Trying to prune unused volumes... (cluster.py:935, cleanup)
2026-04-30 17:24:39 [ 509 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:24:39 [ 509 ] DEBUG : Stdout:2 (cluster.py:121, run_and_check)
2026-04-30 17:24:39 [ 509 ] DEBUG : Setup directory for instance: node (cluster.py:2808, start)
2026-04-30 17:24:39 [ 509 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:24:39 [ 509 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:24:39 [ 509 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:24:39 [ 509 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:24:39 [ 509 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_replicated_table_attach/configs/config.xml'] to /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:24:40 [ 509 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/database (cluster.py:4649, create_dir)
2026-04-30 17:24:40 [ 509 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs (cluster.py:4660, create_dir)
2026-04-30 17:24:40 [ 509 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4746, create_dir)
2026-04-30 17:24:40 [ 509 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:1e0b53d756cf', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env (cluster.py:86, _create_env_file)
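The `Env {...} stored in .../.env` entry corresponds to serializing that dict into the `--env-file` that the docker-compose commands below consume. A minimal sketch of what a `_create_env_file` step could look like (the signature is assumed from the log context, not taken from cluster.py):

```python
import os


def _create_env_file(path: str, variables: dict) -> str:
    # Write KEY=value lines, the format docker-compose expects for --env-file.
    env_path = os.path.join(path, ".env")
    with open(env_path, "w") as f:
        for name, value in variables.items():
            f.write(f"{name}={value}\n")
    return env_path
```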
2026-04-30 17:24:40 [ 509 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2026-04-30 17:24:40 [ 509 ] DEBUG : No config file found (config.py:28, find_config_file)
2026-04-30 17:24:40 [ 509 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2026-04-30 17:24:40 [ 509 ] DEBUG : No config file found (config.py:28, find_config_file)
2026-04-30 17:24:40 [ 509 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 824 (connectionpool.py:547, _make_request)
2026-04-30 17:24:40 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', '--project-name', 'roottestreplicatedtableattach_gw9', '--file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', 'pull'] (cluster.py:113, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo1 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo3 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo2 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling node ... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling node ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo3 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo3 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo3 ... status: image is up to date for a... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo3 ... done (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo2 ... status: image is up to date for a... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo1 ... status: image is up to date for a... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo2 ... done (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling node ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling node ... status: image is up to date for a... (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling zoo1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:25:25 [ 509 ] DEBUG : Stderr:Pulling node ... done (cluster.py:123, run_and_check)
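Every `Command:`/`Stdout:`/`Stderr:` triple in this log comes from one helper, `run_and_check` (cluster.py:113/121/123). A plausible minimal shape for it, assuming subprocess plus line-by-line logging (a sketch, not the actual implementation):

```python
import logging
import subprocess


def run_and_check(args):
    logging.debug("Command:%s", args)
    res = subprocess.run(args, capture_output=True, text=True)
    for line in res.stdout.splitlines():
        logging.debug("Stdout:%s", line)   # matches the "Stdout:..." entries
    for line in res.stderr.splitlines():
        logging.debug("Stderr:%s", line)   # matches the "Stderr:..." entries
    if res.returncode != 0:
        raise Exception(f"Command {args} failed, return code {res.returncode}")
    return res
```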
2026-04-30 17:25:25 [ 509 ] DEBUG : Setup ZooKeeper (cluster.py:2849, start)
2026-04-30 17:25:25 [ 509 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/log', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/config', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/log', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/config', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/coordination', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/log', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/config', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/coordination'] (cluster.py:2850, start)
2026-04-30 17:25:25 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', '--project-name', 'roottestreplicatedtableattach_gw9', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.25', 'Details': {'GitCommit': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}}, {'Name': 'runc', 'Version': '1.2.4', 'Details': {'GitCommit': 'v1.2.4-0-g6c52b3f'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestreplicatedtableattachgw9_default') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'ID': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Containers': 51, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestreplicatedtableattach_gw9_default') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestreplicatedtableattach_gw9_default" with the default driver (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestreplicatedtableattach_gw9_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestreplicatedtableattach_gw9', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': '50a8d192d0500e439c5d93882c28bf975f94636b3bc4084829f1d07fb443cbab', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {, , } (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattach_gw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicatedtableattachgw9', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo2', number=1)} (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo2', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo1', number=1)} (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo1', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo3_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo3', number=1)} (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo3', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 098b6ebec28ef9bc9ceb23009e414d396ca70629e97575cf42f1fe95590d06f9 (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicatedtableattach_gw9_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 34bb023f16136b1e9c9f42cd12a815531faf32cf9bc94aabfaa1ec1987fcb21c (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicatedtableattach_gw9_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/log', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/log', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Target': '/var/log/clickhouse-keeper', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
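docker-compose's verbose output here is a thin proxy over docker-py's low-level client: each `docker create_host_config <- (...)` line is literally a call to `APIClient.create_host_config(...)`. The same host config can be built directly; the sketch below reproduces a subset of the logged keeper1 arguments (paths and names are taken from the log, the standalone snippet itself is illustrative):

```python
import docker

api = docker.APIClient()  # low-level client, the API surface compose logs above

inst = "/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9"
host_config = api.create_host_config(
    network_mode="roottestreplicatedtableattach_gw9_default",
    restart_policy={"Name": "always", "MaximumRetryCount": 0},
    cap_add=["SYS_PTRACE", "NET_ADMIN", "IPC_LOCK", "SYS_NICE"],
    security_opt=["label:disable"],
    dns_opt=["attempts:2", "timeout:1", "inet6", "rotate"],
    mounts=[
        # Bind mounts as in the log: keeper config/state/log dirs, plus the single
        # clickhouse binary mounted as both the server and the keeper entrypoint.
        docker.types.Mount("/etc/clickhouse-keeper", f"{inst}/keeper1/config", type="bind"),
        docker.types.Mount("/var/lib/clickhouse-keeper", f"{inst}/keeper1/coordination", type="bind"),
        docker.types.Mount("/var/log/clickhouse-keeper", f"{inst}/keeper1/log", type="bind"),
        docker.types.Mount("/usr/bin/clickhouse", "/clickhouse", type="bind"),
        docker.types.Mount("/usr/bin/clickhouse-keeper", "/clickhouse", type="bind"),
    ],
)
```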
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicatedtableattach_gw9_zoo2_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicatedtableattach_gw9', 'com.docker.compose.service': 'zoo2', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '098b6ebec28ef9bc9ceb23009e414d396ca70629e97575cf42f1fe95590d06f9'}, host_config={'NetworkMode': 'roottestreplicatedtableattach_gw9_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicatedtableattach_gw9_default': {'Aliases': ['zoo2'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Target': '/var/lib/clickhouse', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicatedtableattach_gw9_zoo1_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicatedtableattach_gw9', 'com.docker.compose.service': 'zoo1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '34bb023f16136b1e9c9f42cd12a815531faf32cf9bc94aabfaa1ec1987fcb21c'}, host_config={'NetworkMode': 'roottestreplicatedtableattach_gw9_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/log', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicatedtableattach_gw9_default': {'Aliases': ['zoo1'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check)
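Likewise, each `docker create_container <- (...)` line maps onto `APIClient.create_container(...)`. Continuing the previous sketch, the zoo1 keeper container from the log could be created like this (entrypoint, image, name, and alias copied from the log; the snippet is illustrative and reuses the `host_config` built above):

```python
container = api.create_container(
    image="altinityinfra/integration-test:1e0b53d756cf",
    name="roottestreplicatedtableattach_gw9_zoo1_1",
    user="0",
    detach=True,
    entrypoint=(
        "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml "
        "--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log "
        "--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
    ),
    host_config=host_config,
    # The network alias is what lets the other containers reach this node as "zoo1".
    networking_config=api.create_networking_config(
        {"roottestreplicatedtableattach_gw9_default": api.create_endpoint_config(aliases=["zoo1"])}
    ),
)
```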
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check)
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
(cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 3035e04c97d9ce5843c19fd1713a6a83e495c2206f07e602cd6b3e88b7d50a71 (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicatedtableattach_gw9_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/config', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/coordination', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Target': '/var/lib/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 
17:25:33 [ 509 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicatedtableattach_gw9_zoo3_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicatedtableattach_gw9', 'com.docker.compose.service': 'zoo3', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '3035e04c97d9ce5843c19fd1713a6a83e495c2206f07e602cd6b3e88b7d50a71'}, host_config={'NetworkMode': 'roottestreplicatedtableattach_gw9_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/config', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicatedtableattach_gw9_default': {'Aliases': ['zoo3'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
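The verbose-proxy records above show docker-compose 1.29 driving the docker-py low-level API: build a host config, create the container, then start it. Below is a minimal sketch of the same three calls, assuming docker-py and a local Docker daemon; the image tag, mounts, and names are copied from the log, while the surrounding code is illustrative and not the actual compose or cluster.py internals.

    # Sketch of the create_host_config -> create_container -> start sequence
    # that docker-compose proxies in the records above. Illustrative only.
    import docker
    from docker.types import Mount

    client = docker.APIClient()

    host_config = client.create_host_config(
        network_mode="roottestreplicatedtableattach_gw9_default",
        restart_policy={"Name": "always", "MaximumRetryCount": 0},
        cap_add=["SYS_PTRACE", "NET_ADMIN", "IPC_LOCK", "SYS_NICE"],
        dns_opt=["attempts:2", "timeout:1", "inet6", "rotate"],
        security_opt=["label:disable"],
        mounts=[
            # Bind-mount the keeper config and the single clickhouse binary,
            # as in the 'Mounts' list logged above (two of six mounts shown).
            Mount(target="/etc/clickhouse-keeper",
                  source="/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/keeper3/config",
                  type="bind"),
            Mount(target="/usr/bin/clickhouse-keeper",
                  source="/clickhouse", type="bind"),
        ],
    )

    container = client.create_container(
        image="altinityinfra/integration-test:1e0b53d756cf",
        entrypoint="clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml",
        user="0",
        name="roottestreplicatedtableattach_gw9_zoo3_1",
        detach=True,
        host_config=host_config,
        networking_config=client.create_networking_config(
            {"roottestreplicatedtableattach_gw9_default":
                 client.create_endpoint_config(aliases=["zoo3"])}
        ),
    )
    client.start(container=container["Id"])

The 'always' restart policy and the capability set are the ones echoed from docker_compose_keeper.yml in the host_config dump above.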
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'd6cb5ab16dfa3c2bdde3c1b23e5f8d1cb755584fd6eb690bb3a01217bf3c174d', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('d6cb5ab16dfa3c2bdde3c1b23e5f8d1cb755584fd6eb690bb3a01217bf3c174d') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config3.xml', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '73fe503cb399bbc80632e6263d394b257f465dc4d948a32461cd85dd4f7e3626', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('73fe503cb399bbc80632e6263d394b257f465dc4d948a32461cd85dd4f7e3626') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config1.xml', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8', 'roottestreplicatedtableattach_gw9_default') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('d6cb5ab16dfa3c2bdde3c1b23e5f8d1cb755584fd6eb690bb3a01217bf3c174d', 'roottestreplicatedtableattach_gw9_default') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config2.xml', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('73fe503cb399bbc80632e6263d394b257f465dc4d948a32461cd85dd4f7e3626', 'roottestreplicatedtableattach_gw9_default') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8', 'roottestreplicatedtableattach_gw9_default', aliases=['7f17417b6050', 'zoo1'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('d6cb5ab16dfa3c2bdde3c1b23e5f8d1cb755584fd6eb690bb3a01217bf3c174d', 'roottestreplicatedtableattach_gw9_default', aliases=['zoo3', 'd6cb5ab16dfa'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('73fe503cb399bbc80632e6263d394b257f465dc4d948a32461cd85dd4f7e3626', 'roottestreplicatedtableattach_gw9_default', aliases=['73fe503cb399', 'zoo2'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('73fe503cb399bbc80632e6263d394b257f465dc4d948a32461cd85dd4f7e3626') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- 
('d6cb5ab16dfa3c2bdde3c1b23e5f8d1cb755584fd6eb690bb3a01217bf3c174d') (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo2', number=1) (cluster.py:123, run_and_check)
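Just before starting each keeper, compose detaches the freshly created container from the project network and re-attaches it with its aliases, the service name plus the short container id, which is the disconnect/connect pair in the records above. The equivalent docker-py calls, reusing the hypothetical client from the previous sketch:

    # Re-attach the new container with its service alias and short-id alias,
    # then start it, mirroring the zoo1 records above. Illustrative only.
    container_id = "7f17417b605028824dded0a902ad609ddb722a32c21bfe139528f09ce35b48b8"
    network = "roottestreplicatedtableattach_gw9_default"

    client.disconnect_container_from_network(container_id, network)
    client.connect_container_to_network(
        container_id,
        network,
        aliases=[container_id[:12], "zoo1"],
    )
    client.start(container=container_id)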
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo3', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo3_1 ... done (cluster.py:123, run_and_check)
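Every external command in this log passes through the harness's run_and_check helper (the cluster.py:113/121/123 records): it logs the argv, relays each stdout/stderr line into the test log under a Stdout:/Stderr: prefix, and fails on a non-zero exit code. A hypothetical stand-in with that shape, not the actual cluster.py implementation:

    import logging
    import subprocess

    def run_and_check(args):
        # Produces the Command:/Stdout:/Stderr: records seen throughout: log
        # the argv, capture output, re-log it line by line, raise on error.
        logging.debug("Command:%s", args)
        proc = subprocess.run(args, capture_output=True, text=True)
        for line in proc.stdout.splitlines():
            logging.debug("Stdout:%s", line)
        for line in proc.stderr.splitlines():
            logging.debug("Stderr:%s", line)
        if proc.returncode != 0:
            raise RuntimeError(f"command {args} exited with code {proc.returncode}")
        return proc.stdout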
2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicatedtableattach_gw9', service='zoo1', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_zoo1_1 ... 
done (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:25:33 [ 509 ] DEBUG : Wait ZooKeeper to start (cluster.py:2504, wait_zookeeper_to_start) 2026-04-30 17:25:33 [ 509 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2135, get_instance_ip) 2026-04-30 17:25:33 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicatedtableattach_gw9_zoo1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:33 [ 509 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.18.4, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client) 2026-04-30 17:25:33 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:33 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:33 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:33 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:33 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:33 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:34 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:34 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:34 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:34 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:35 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:35 [ 509 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] INFO : Connecting to 172.16.18.4(172.16.18.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] WARNING : Transition to CONNECTING 
(connection.py:626, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:25:37 [ 509 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2135, get_instance_ip) 2026-04-30 17:25:37 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicatedtableattach_gw9_zoo2_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:37 [ 509 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.18.3, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client) 2026-04-30 17:25:37 [ 509 ] INFO : Connecting to 172.16.18.3(172.16.18.3):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:25:37 [ 509 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2135, get_instance_ip) 2026-04-30 17:25:37 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicatedtableattach_gw9_zoo3_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:37 [ 509 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.18.2, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client) 2026-04-30 17:25:37 [ 509 ] INFO : Connecting to 172.16.18.2(172.16.18.2):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:25:37 [ 509 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:25:37 [ 509 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:25:37 [ 509 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2026-04-30 17:25:37 [ 509 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2026-04-30 17:25:37 [ 509 ] DEBUG : All instances of ZooKeeper Secure started (cluster.py:2519, wait_zookeeper_nodes_to_start) 2026-04-30 17:25:37 [ 509 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env --project-name roottestreplicatedtableattach_gw9 --file /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') (cluster.py:3146, start) 2026-04-30 17:25:37 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', '--project-name', 'roottestreplicatedtableattach_gw9', '--file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', 'up', '-d', '--no-recreate'] (cluster.py:113, run_and_check) 2026-04-30 17:25:43 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_node_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:25:43 [ 509 ] DEBUG : Stderr:Creating roottestreplicatedtableattach_gw9_node_1 ... 
done (cluster.py:123, run_and_check) 2026-04-30 17:25:43 [ 509 ] DEBUG : ClickHouse instance created (cluster.py:3154, start) 2026-04-30 17:25:43 [ 509 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2135, get_instance_ip) 2026-04-30 17:25:43 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicatedtableattach_gw9_node_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:43 [ 509 ] DEBUG : Waiting for ClickHouse start in node, ip: 172.16.18.5... (cluster.py:3161, start) 2026-04-30 17:25:43 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicatedtableattach_gw9_node_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:43 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
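Both waits in this phase are plain polling loops: the keeper readiness check above connects to each node with kazoo and lists the root znode, and the ClickHouse wait here repeatedly inspects the container until the server answers. The keeper side can be sketched directly with kazoo; the IPs are the ones from the log, while the helper name is made up for illustration:

    from kazoo.client import KazooClient

    def wait_zookeeper_node(ip, port=2181, timeout=60):
        # kazoo retries internally while the server boots; the 'Connection
        # refused' warnings above are those retries. Once connected, listing
        # '/' (which returned ['keeper'] in this log) proves liveness.
        zk = KazooClient(hosts=f"{ip}:{port}")
        zk.start(timeout=timeout)
        assert zk.get_children("/")
        zk.stop()
        zk.close()

    for ip in ("172.16.18.4", "172.16.18.3", "172.16.18.2"):  # zoo1, zoo2, zoo3
        wait_zookeeper_node(ip)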
/v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:55 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:56 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/dc36f9902f47ce3526decdb065dfed567aea153e1acd01292bd9e109e0473b8d/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:25:56 [ 509 ] DEBUG : ClickHouse node started (cluster.py:3165, start) ------------------------------ Captured log call ------------------------------- 2026-04-30 17:25:56 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /etc/clickhouse-server/config.d'] (cluster.py:2173, exec_in_container) 2026-04-30 17:25:56 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', 'ls /etc/clickhouse-server/config.d'] (cluster.py:113, run_and_check) 2026-04-30 17:25:57 [ 509 ] DEBUG : Stdout:0_common_enable_keeper_async_replication.xml (cluster.py:121, run_and_check) 2026-04-30 17:25:57 [ 509 ] DEBUG : Stdout:0_common_instance_config.xml (cluster.py:121, run_and_check) 2026-04-30 17:25:57 [ 509 ] DEBUG : Stdout:config.xml (cluster.py:121, run_and_check) 2026-04-30 17:25:57 [ 509 ] DEBUG : Executing query DROP TABLE IF EXISTS replicated_table SYNC on node (cluster.py:3602, query) 2026-04-30 17:25:58 [ 509 ] DEBUG : Executing query CREATE TABLE replicated_table (k UInt64, i32 Int32) ENGINE=ReplicatedMergeTree('/clickhouse/replicated_table', 'r1') ORDER BY k on node (cluster.py:3602, query) 2026-04-30 17:25:59 [ 509 ] DEBUG : Executing query INSERT INTO replicated_table VALUES(20, 30) on node (cluster.py:3602, query) 2026-04-30 17:26:01 [ 509 ] DEBUG : Executing query SELECT * FROM replicated_table on node (cluster.py:3602, query) 2026-04-30 17:26:02 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:26:02 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:26:04 [ 509 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check) 
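The burst of identical "GET /containers/{id}/json" requests above is the harness polling the Docker inspect endpoint until the freshly started container settles. A minimal sketch of that pattern using the docker-py SDK (the helper name wait_container_running is illustrative, not the harness's actual function):

import time
import docker  # docker-py; its HTTP layer (urllib3) produces the connectionpool.py records above

def wait_container_running(container_id: str, timeout: float = 60.0) -> None:
    """Poll the Docker inspect endpoint until the container reports 'running'."""
    client = docker.from_env()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # containers.get() issues GET /containers/{id}/json, as seen in the log
        if client.containers.get(container_id).status == "running":
            return
        time.sleep(0.1)  # roughly the cadence visible above
    raise TimeoutError(f"container {container_id} never reached 'running' state")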
2026-04-30 17:26:04 [ 509 ] DEBUG : Stdout: 8 ? 00:00:07 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:26:04 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:04 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:26:05 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:05 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:26:06 [ 509 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
[... the same process poll repeated every 1-2 s, each time returning Stdout:8, until 17:26:16; PID 8 never exits ...]
2026-04-30 17:26:16 [ 509 ] WARNING : Force kill clickhouse in stop_clickhouse. ps:8 (cluster.py:3926, stop_clickhouse)
2026-04-30 17:26:16 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stdout.log"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:16 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stdout.log"] (cluster.py:113, run_and_check)
2026-04-30 17:26:17 [ 509 ] DEBUG : Stderr:bash: line 1: /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stdout.log: No such file or directory (cluster.py:123, run_and_check)
2026-04-30 17:26:17 [ 509 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check)
2026-04-30 17:26:17 [ 509 ] WARNING : Stop ClickHouse raised an error Command ['docker', 'exec', '-u', 'root', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stdout.log"] return non-zero code 1: bash: line 1: /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stdout.log: No such file or directory (cluster.py:3947, stop_clickhouse)
[... one more process poll at 17:26:17-17:26:18 still returns Stdout:8 ...]
2026-04-30 17:26:18 [ 509 ] DEBUG : Clickhouse process running. (cluster.py:3975, start_clickhouse)
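Note the gdb dump above fails only because the per-instance logs directory does not exist when the output redirect is opened. A hedged sketch of the dump-then-kill step with a directory guard added (dump_threads_then_kill is an illustrative name; the real harness shells out via exec_in_container):

import subprocess

def dump_threads_then_kill(container: str, pid: int, out_path: str) -> None:
    """Capture full thread backtraces with gdb, then SIGKILL the stuck process.

    Creating the parent directory first avoids the 'No such file or directory'
    redirect failure seen in the log above. Sketch only, under those assumptions.
    """
    dump_cmd = (
        f"mkdir -p $(dirname {out_path}) && "
        f"gdb -batch -ex 'thread apply all bt full' -p {pid} > {out_path}"
    )
    subprocess.run(["docker", "exec", "-u", "root", container, "bash", "-c", dump_cmd], check=False)
    subprocess.run(["docker", "exec", "-u", "root", container, "bash", "-c", f"kill -9 {pid}"], check=False)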
[... process poll at 17:26:18-17:26:19 returns Stdout:8 ...]
2026-04-30 17:26:19 [ 509 ] DEBUG : Executing query select 20 on node (cluster.py:3602, query)
[... the same readiness probe ("select 20") retried ten times between 17:26:19 and 17:26:33 ...]
2026-04-30 17:26:35 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:35 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:26:36 [ 509 ] WARNING : Current start attempt failed. Will kill 8 just in case. (cluster.py:3982, start_clickhouse)
2026-04-30 17:26:36 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 8'] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:36 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', 'kill -9 8'] (cluster.py:113, run_and_check)
2026-04-30 17:26:37 [ 509 ] DEBUG : Stderr:bash: line 1: kill: (8) - No such process (cluster.py:123, run_and_check)
2026-04-30 17:26:37 [ 509 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check)
_________________ test_startup_with_small_bg_pool_partitioned __________________
[gw9] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_startup_with_small_bg_pool_partitioned(started_cluster):
        start_clean_clickhouse()
>       node.query("DROP TABLE IF EXISTS replicated_table_partitioned SYNC")

test_replicated_table_attach/test.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.18.5:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
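The repeated "select 20" records above are wait_start's readiness probe: start_clickhouse considers the server up only once a trivial query succeeds over TCP. A minimal sketch of that probe (wait_clickhouse_ready is an illustrative name, not the harness's actual helper):

import subprocess
import time

def wait_clickhouse_ready(container: str, timeout: float = 60.0) -> None:
    """Retry a trivial query until the server accepts connections or time runs out."""
    deadline = time.monotonic() + timeout
    last_err = ""
    while time.monotonic() < deadline:
        proc = subprocess.run(
            ["docker", "exec", container, "clickhouse-client", "--query", "select 20"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return  # server is accepting connections
        last_err = proc.stderr.strip()  # e.g. Code: 210. DB::NetException: Connection refused
        time.sleep(1)
    raise TimeoutError(f"server never became ready: {last_err}")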
------------------------------ Captured log call -------------------------------
2026-04-30 17:26:40 [ 509 ] DEBUG : run container_id:roottestreplicatedtableattach_gw9_node_1 detach:False nothrow:False cmd: ['bash', '-c', 'ls /etc/clickhouse-server/config.d'] (cluster.py:2173, exec_in_container)
2026-04-30 17:26:40 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestreplicatedtableattach_gw9_node_1', 'bash', '-c', 'ls /etc/clickhouse-server/config.d'] (cluster.py:113, run_and_check)
2026-04-30 17:26:42 [ 509 ] DEBUG : Stdout:0_common_enable_keeper_async_replication.xml (cluster.py:121, run_and_check)
2026-04-30 17:26:42 [ 509 ] DEBUG : Stdout:0_common_instance_config.xml (cluster.py:121, run_and_check)
2026-04-30 17:26:42 [ 509 ] DEBUG : Stdout:config.xml (cluster.py:121, run_and_check)
2026-04-30 17:26:42 [ 509 ] DEBUG : Executing query DROP TABLE IF EXISTS replicated_table_partitioned SYNC on node (cluster.py:3602, query)
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:26:48 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', '--project-name', 'roottestreplicatedtableattach_gw9', '--file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check)
[... Stderr: Stopping roottestreplicatedtableattach_gw9_node_1 / _zoo1_1 / _zoo2_1 / _zoo3_1 ... done (17:26:59) ...]
2026-04-30 17:26:59 [ 509 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check)
2026-04-30 17:26:59 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/.env', '--project-name', 'roottestreplicatedtableattach_gw9', '--file', '/ClickHouse/tests/integration/test_replicated_table_attach/_instances_0_gw9/node/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', 'down', '--volumes'] (cluster.py:113, run_and_check)
[... Stderr: Removing roottestreplicatedtableattach_gw9_node_1 / _zoo1_1 / _zoo2_1 / _zoo3_1 ... done; Removing network roottestreplicatedtableattach_gw9_default (17:27:06) ...]
2026-04-30 17:27:06 [ 509 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2026-04-30 17:27:06 [ 509 ] DEBUG : Docker networks for project roottestreplicatedtableattach_gw9 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:27:06 [ 509 ] DEBUG : Docker containers for project roottestreplicatedtableattach_gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:27:06 [ 509 ] DEBUG : Docker volumes for project roottestreplicatedtableattach_gw9 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:27:06 [ 509 ] DEBUG : Command:docker container list --all --filter name='^/roottestreplicatedtableattach_gw9_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2026-04-30 17:27:07 [ 509 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2026-04-30 17:27:07 [ 509 ] DEBUG : No running containers for project: roottestreplicatedtableattach_gw9 (cluster.py:904, cleanup)
2026-04-30 17:27:07 [ 509 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2026-04-30 17:27:07 [ 509 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
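The teardown above is mechanical: stop the compose project with a grace period, remove it together with its volumes, then prune leftovers. A sketch of the same sequence (teardown_compose is an illustrative name; the flags are the ones visible in the log):

import subprocess

def teardown_compose(env_file: str, project: str, compose_files: list[str]) -> None:
    """Stop and remove a test's docker-compose project, then prune leftovers."""
    base = ["docker-compose", "--env-file", env_file, "--project-name", project]
    for f in compose_files:
        base += ["--file", f]
    subprocess.run(base + ["stop", "--timeout", "20"], check=False)   # graceful stop
    subprocess.run(base + ["down", "--volumes"], check=False)         # remove containers and volumes
    subprocess.run(["docker", "image", "prune", "-f"], check=False)   # reclaim dangling images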
2026-04-30 17:27:07 [ 509 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2026-04-30 17:27:07 [ 509 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2026-04-30 17:27:07 [ 509 ] DEBUG : Images pruned (cluster.py:929, cleanup)
2026-04-30 17:27:07 [ 509 ] DEBUG : Trying to prune unused volumes... (cluster.py:935, cleanup)
2026-04-30 17:27:07 [ 509 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:27:07 [ 509 ] DEBUG : Stdout:4 (cluster.py:121, run_and_check)
______________ test_database_with_multiple_non_default_schemas_1 _______________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_database_with_multiple_non_default_schemas_1(started_cluster):
        cursor = pg_manager.get_db_cursor()
        NUM_TABLES = 5
        schema_name = "test_schema"
        clickhouse_postgres_db = "postgres_database_with_schema"
        materialized_db = "test_database"
        publication_tables = ""
        global insert_counter
        insert_counter = 0

        def insert_into_tables():
            global insert_counter
            clickhouse_postgres_db = "postgres_database_with_schema"
            for i in range(NUM_TABLES):
                table_name = f"postgresql_replica_{i}"
                instance.query(
                    f"INSERT INTO {clickhouse_postgres_db}.{table_name} SELECT number, number from numbers(1000 * {insert_counter}, 1000)"
                )
            insert_counter += 1

        def assert_show_tables(expected):
            result = instance.query("SHOW TABLES FROM test_database")
            assert result == expected
            print("assert show tables Ok")

        def check_all_tables_are_synchronized():
            for i in range(NUM_TABLES):
                print("checking table", i)
                check_tables_are_synchronized(
                    instance,
                    "postgresql_replica_{}".format(i),
                    schema_name=schema_name,
                    postgres_database=clickhouse_postgres_db,
                )
            print("synchronization Ok")

        create_postgres_schema(cursor, schema_name)
        pg_manager.create_clickhouse_postgres_db(
            database_name=clickhouse_postgres_db,
            schema_name=schema_name,
            postgres_database="postgres_database",
        )
        for i in range(NUM_TABLES):
            table_name = "postgresql_replica_{}".format(i)
            create_postgres_table_with_schema(cursor, schema_name, table_name)
            if publication_tables != "":
                publication_tables += ", "
            publication_tables += schema_name + "." + table_name
        insert_into_tables()
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                f"materialized_postgresql_tables_list = '{publication_tables}'",
                "materialized_postgresql_tables_list_with_schema=1",
            ],
        )
        check_all_tables_are_synchronized()
        assert_show_tables(
            "test_schema.postgresql_replica_0\ntest_schema.postgresql_replica_1\ntest_schema.postgresql_replica_2\ntest_schema.postgresql_replica_3\ntest_schema.postgresql_replica_4\n"
        )
>       instance.restart_clickhouse()

test_postgresql_replica_database_engine_2/test.py:476:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:4055: in restart_clickhouse
    self.start_clickhouse(stop_start_wait_sec)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
start_wait_sec = 60, retry_start = True, expected_to_fail = False

    def start_clickhouse(
        self, start_wait_sec=60, retry_start=True, expected_to_fail=False
    ):
        if not self.stay_alive:
            raise Exception(
                "ClickHouse can be started again only with stay_alive=True instance"
            )
        start_time = time.time()
        time_to_sleep = 0.5

        while start_time + start_wait_sec >= time.time():
            # sometimes after SIGKILL (hard reset) server may refuse to start for some time
            # for different reasons.
            pid = self.get_process_pid("clickhouse")
            if pid is None:
                logging.debug("No clickhouse process running. Start new one.")
                self.exec_in_container(
                    ["bash", "-c", "{} --daemon".format(self.clickhouse_start_command)],
                    user=str(os.getuid()),
                )
                if expected_to_fail:
                    self.wait_start_failed(start_wait_sec + start_time - time.time())
                    return
                time.sleep(1)
                continue
            else:
                logging.debug("Clickhouse process running.")
                if expected_to_fail:
                    raise Exception("ClickHouse was expected not to be running.")
                try:
                    self.wait_start(start_wait_sec + start_time - time.time())
                    return
                except Exception as e:
                    logging.warning(
                        f"Current start attempt failed. Will kill {pid} just in case."
                    )
                    self.exec_in_container(
                        ["bash", "-c", f"kill -9 {pid}"], user="root", nothrow=True
                    )
                    if not retry_start:
                        raise
                    time.sleep(time_to_sleep)

>       raise Exception("Cannot start ClickHouse, see additional info in logs")
E       Exception: Cannot start ClickHouse, see additional info in logs

helpers/cluster.py:3992: Exception
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
checking table 0
Checking table test_schema.postgresql_replica_0 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_0`
checking table 1
Checking table test_schema.postgresql_replica_1 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_1`
checking table 2
Checking table test_schema.postgresql_replica_2 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_2`
checking table 3
Checking table test_schema.postgresql_replica_3 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_3`
checking table 4
Checking table test_schema.postgresql_replica_4 exists in test_database
Checking table is synchronized: `test_database`.`test_schema.postgresql_replica_4`
synchronization Ok
assert show tables Ok
------------------------------ Captured log call -------------------------------
2026-04-30 17:24:13 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_with_schema" on instance (cluster.py:3602, query)
2026-04-30 17:24:13 [ 413 ] DEBUG : Executing query CREATE DATABASE "postgres_database_with_schema" ENGINE = PostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword', 'test_schema') on instance (cluster.py:3602, query)
2026-04-30 17:24:15 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database_with_schema.postgresql_replica_0 SELECT number, number from numbers(1000 * 0, 1000) on instance (cluster.py:3602, query)
[... the same INSERT repeated for postgresql_replica_1 through postgresql_replica_4, 17:24:16-17:24:21 ...]
2026-04-30 17:24:22 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
2026-04-30 17:24:23 [ 413 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'test_schema.postgresql_replica_0, test_schema.postgresql_replica_1, test_schema.postgresql_replica_2, test_schema.postgresql_replica_3, test_schema.postgresql_replica_4', materialized_postgresql_tables_list_with_schema=1 on instance (cluster.py:3602, query)
2026-04-30 17:24:25 [ 413 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3602, query)
2026-04-30 17:24:26 [ 413 ] DEBUG : Executing query SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_0' on instance (cluster.py:3602, query)
[... for each of postgresql_replica_0 through postgresql_replica_4 (17:24:26-17:24:52): SHOW TABLES FROM `test_database` WHERE name = 'test_schema.postgresql_replica_N', then select * from `postgres_database_with_schema`.`postgresql_replica_N` order by key and select * from `test_database`.`test_schema.postgresql_replica_N` order by key ...]
2026-04-30 17:24:52 [ 413 ] DEBUG : Executing query select * from `test_database`.`test_schema.postgresql_replica_4` order by key; on instance (cluster.py:3602, query)
2026-04-30 17:24:55 [ 413 ] DEBUG : Executing query SHOW TABLES FROM test_database on instance (cluster.py:3602, query)
2026-04-30 17:24:56 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:24:56 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:24:57 [ 413 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2026-04-30 17:24:57 [ 413 ] DEBUG : Stdout: 789 ? 00:01:30 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:24:57 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:24:57 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:24:57 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:24:57 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:24:59 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check)
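The paired selects above are what check_tables_are_synchronized boils down to: fetch the same rows, ordered by key, from the PostgreSQL-engine source database and from the materialized database, and compare. A minimal sketch of that comparison under those assumptions (assert_table_synchronized and its retry policy are illustrative, not the helper's actual code):

import time

def assert_table_synchronized(instance, table: str, source_db: str,
                              materialized_db: str = "test_database",
                              schema: str = "test_schema",
                              timeout: float = 60.0) -> None:
    """Compare source vs. materialized contents until they match or time runs out."""
    deadline = time.monotonic() + timeout
    while True:
        expected = instance.query(f"select * from `{source_db}`.`{table}` order by key;")
        actual = instance.query(f"select * from `{materialized_db}`.`{schema}.{table}` order by key;")
        if expected == actual:
            return
        if time.monotonic() > deadline:
            raise AssertionError(f"{schema}.{table} did not converge within {timeout}s")
        time.sleep(1)  # MaterializedPostgreSQL replication is asynchronous, so poll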
[... the same process poll repeated every 1-2 s from 17:25:00 to 17:25:44, each time returning Stdout:789 ...]
2026-04-30 17:25:46 [ 413 ] DEBUG : Stdout:789 (cluster.py:121, run_and_check)
2026-04-30 17:25:46 [ 413 ] DEBUG : Stdout:1743 (cluster.py:121, run_and_check)
[... two further polls (17:25:47-17:25:50) still return 789 and 1743; the polls at 17:25:51-17:25:52 return nothing ...]
2026-04-30 17:25:52 [ 413 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3964, start_clickhouse)
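stop_clickhouse's shutdown loop is visible above: send pkill, then re-run the ps pipeline until no PID comes back. A sketch of that loop (stop_and_wait is an illustrative name; the grep chain is copied from the log):

import subprocess
import time

PS_PIPELINE = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
               "| grep -v 'bash -c' | awk '{print $1}'")

def stop_and_wait(container: str, timeout: float = 60.0) -> bool:
    """SIGTERM clickhouse inside the container, then poll until no PID remains."""
    subprocess.run(["docker", "exec", "-u", "root", container, "bash", "-c", "pkill clickhouse"])
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(["docker", "exec", container, "bash", "-c", PS_PIPELINE],
                             capture_output=True, text=True).stdout.strip()
        if not out:
            return True   # no clickhouse process left
        time.sleep(1)     # PIDs like 789 above can linger while the server drains
    return False          # caller escalates to SIGKILL, as stop_clickhouse does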
2026-04-30 17:25:52 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2173, exec_in_container)
2026-04-30 17:25:52 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check)
[... process polls at 17:25:55-17:25:57 return Stdout:1808 ...]
2026-04-30 17:25:56 [ 413 ] DEBUG : Clickhouse process running. (cluster.py:3975, start_clickhouse)
2026-04-30 17:25:57 [ 413 ] DEBUG : Executing query select 20 on instance (cluster.py:3602, query)
[... the readiness probe retried ten times, 17:25:57-17:26:11; the process poll still returns Stdout:1808 ...]
2026-04-30 17:26:17 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
[... process poll returns Stdout:1808; ten more "select 20" probes, 17:26:19-17:26:34; poll again returns Stdout:1808 ...]
2026-04-30 17:26:37 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
[... process poll returns Stdout:1808; ten more "select 20" probes, 17:26:40-17:26:56; poll again returns Stdout:1808 ...]
2026-04-30 17:27:00 [ 413 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start)
2026-04-30 17:27:00 [ 413 ] ERROR : No time left to start. But process is still running. Will dump threads. (cluster.py:4013, wait_start)
2026-04-30 17:27:00 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:27:00 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:27:02 [ 413 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2026-04-30 17:27:02 [ 413 ] DEBUG : Stdout: 1808 ? 00:00:25 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:27:02 [ 413 ] INFO : PS RESULT: PID TTY TIME CMD 1808 ? 00:00:25 clickhouse (cluster.py:4019, wait_start)
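When the startup deadline expires while the PID is still alive, wait_start logs the ps output and asks for a thread dump rather than failing silently, as the ERROR record above shows. A hedged sketch of that final diagnostic step (report_stuck_server is an illustrative name):

import subprocess

def report_stuck_server(container: str, pid: int) -> None:
    """On startup timeout with a live process, capture ps output and thread backtraces."""
    ps = subprocess.run(
        ["docker", "exec", "-u", "root", container, "bash", "-c", "ps -C clickhouse"],
        capture_output=True, text=True,
    ).stdout
    print(f"PS RESULT: {ps}")  # the harness logs this at INFO level
    # An alive-but-unresponsive server usually means startup is blocked somewhere;
    # gdb shows where. On a large binary this dump can take minutes, as it does below.
    subprocess.run(["docker", "exec", "-u", "root", container, "bash", "-c",
                    f"gdb -batch -ex 'thread apply all bt full' -p {pid}"])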
00:00:25 clickhouse (cluster.py:4019, wait_start) 2026-04-30 17:27:02 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:27:02 [ 413 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:27:04 [ 413 ] DEBUG : Stdout:1808 (cluster.py:121, run_and_check) 2026-04-30 17:27:04 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 1808"] (cluster.py:2173, exec_in_container) 2026-04-30 17:27:04 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 1808"] (cluster.py:113, run_and_check) 2026-04-30 17:32:04 [ 413 ] WARNING : Current start attempt failed. Will kill 1808 just in case. (cluster.py:3982, start_clickhouse) 2026-04-30 17:32:04 [ 413 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine2_gw2_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 1808'] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:04 [ 413 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine2_gw2_instance_1', 'bash', '-c', 'kill -9 1808'] (cluster.py:113, run_and_check) ______________ test_database_with_multiple_non_default_schemas_2 _______________ [gw2] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_database_with_multiple_non_default_schemas_2(started_cluster): cursor = pg_manager.get_db_cursor() NUM_TABLES = 2 schemas_num = 2 schema_list = "schema0, schema1" materialized_db = "test_database" global insert_counter insert_counter = 0 def check_all_tables_are_synchronized(): for i in range(schemas_num): schema_name = f"schema{i}" clickhouse_postgres_db = f"clickhouse_postgres_db{i}" for ti in range(NUM_TABLES): table_name = f"postgresql_replica_{ti}" print(f"checking table {schema_name}.{table_name}") check_tables_are_synchronized( instance, f"{table_name}", schema_name=schema_name, postgres_database=clickhouse_postgres_db, ) print("synchronized Ok") def insert_into_tables(): global insert_counter for i in range(schemas_num): clickhouse_postgres_db = f"clickhouse_postgres_db{i}" for ti in range(NUM_TABLES): table_name = f"postgresql_replica_{ti}" instance.query( f"INSERT INTO {clickhouse_postgres_db}.{table_name} SELECT number, number from numbers(1000 * {insert_counter}, 1000)" ) insert_counter += 1 def assert_show_tables(expected): result = instance.query("SHOW TABLES FROM test_database") assert result == expected print("assert show tables Ok") for i in range(schemas_num): schema_name = f"schema{i}" clickhouse_postgres_db = f"clickhouse_postgres_db{i}" create_postgres_schema(cursor, schema_name) > pg_manager.create_clickhouse_postgres_db( database_name=clickhouse_postgres_db, schema_name=schema_name, postgres_database="postgres_database", ) test_postgresql_replica_database_engine_2/test.py:563: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:214: in create_clickhouse_postgres_db 
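The sequence above is the wait_start loop in helpers/cluster.py: it repeatedly confirms the server PID is alive, probes the server with a trivial query until the deadline, then dumps all thread backtraces with gdb and kills the PID. As a rough illustration of that control flow only (wait_clickhouse_ready and its docker invocations are hypothetical, not the helpers/cluster.py code):

    import subprocess
    import time

    def wait_clickhouse_ready(container, deadline=60.0):
        # Hypothetical sketch mirroring the wait_start log above.
        pid = None
        start = time.time()
        while time.time() - start < deadline:
            # Same idea as the repeated `ps ax | grep clickhouse` calls.
            pid = subprocess.run(
                ["docker", "exec", container, "bash", "-c",
                 "ps ax | grep clickhouse | grep -v grep | awk '{print $1}'"],
                capture_output=True, text=True,
            ).stdout.strip()
            if not pid:
                raise RuntimeError("clickhouse process died during startup")
            # Same idea as the repeated `Executing query select 20` probes.
            probe = subprocess.run(
                ["docker", "exec", container, "clickhouse-client", "-q", "select 20"],
                capture_output=True, text=True,
            )
            if probe.returncode == 0:
                return pid
            time.sleep(1)  # Code 210 / connection refused: not listening yet
        # Deadline passed but the process still runs: dump stacks before
        # giving up, as the log does with gdb ahead of `kill -9`.
        subprocess.run(["docker", "exec", "-u", "root", container, "bash", "-c",
                        f"gdb -batch -ex 'thread apply all bt full' -p {pid}"])
        raise TimeoutError(f"server pid {pid} never accepted connections")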
______________ test_database_with_multiple_non_default_schemas_2 _______________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_database_with_multiple_non_default_schemas_2(started_cluster):
        cursor = pg_manager.get_db_cursor()
        NUM_TABLES = 2
        schemas_num = 2
        schema_list = "schema0, schema1"
        materialized_db = "test_database"
        global insert_counter
        insert_counter = 0

        def check_all_tables_are_synchronized():
            for i in range(schemas_num):
                schema_name = f"schema{i}"
                clickhouse_postgres_db = f"clickhouse_postgres_db{i}"
                for ti in range(NUM_TABLES):
                    table_name = f"postgresql_replica_{ti}"
                    print(f"checking table {schema_name}.{table_name}")
                    check_tables_are_synchronized(
                        instance,
                        f"{table_name}",
                        schema_name=schema_name,
                        postgres_database=clickhouse_postgres_db,
                    )
            print("synchronized Ok")

        def insert_into_tables():
            global insert_counter
            for i in range(schemas_num):
                clickhouse_postgres_db = f"clickhouse_postgres_db{i}"
                for ti in range(NUM_TABLES):
                    table_name = f"postgresql_replica_{ti}"
                    instance.query(
                        f"INSERT INTO {clickhouse_postgres_db}.{table_name} SELECT number, number from numbers(1000 * {insert_counter}, 1000)"
                    )
            insert_counter += 1

        def assert_show_tables(expected):
            result = instance.query("SHOW TABLES FROM test_database")
            assert result == expected
            print("assert show tables Ok")

        for i in range(schemas_num):
            schema_name = f"schema{i}"
            clickhouse_postgres_db = f"clickhouse_postgres_db{i}"
            create_postgres_schema(cursor, schema_name)
>           pg_manager.create_clickhouse_postgres_db(
                database_name=clickhouse_postgres_db,
                schema_name=schema_name,
                postgres_database="postgres_database",
            )

test_postgresql_replica_database_engine_2/test.py:563:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:16 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "clickhouse_postgres_db0" on instance (cluster.py:3602, query)
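Every failure in this batch dies inside pg_manager.create_clickhouse_postgres_db before any test logic runs, because even the preparatory DROP DATABASE needs a live server. Based on the frames above and the CREATE DATABASE ... ENGINE = PostgreSQL(...) statement captured later in this log, the helper plausibly looks like the sketch below; the exact helpers/postgres_utility.py body is not shown in this log, and the host, credentials, and optional schema argument are assumptions:

    def create_clickhouse_postgres_db(instance, database_name,
                                      schema_name="", postgres_database="postgres_database"):
        # postgres_utility.py:214 first drops any leftover database...
        instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
        # ...then recreates it over the PostgreSQL database engine. Connection
        # literals are copied from the DDL captured for instance2 below; the
        # trailing schema argument is an assumed (documented) engine option.
        if schema_name:
            instance.query(
                f'CREATE DATABASE "{database_name}" ENGINE = PostgreSQL('
                f"'172.16.4.2:5432', '{postgres_database}', 'postgres', 'mysecretpassword', '{schema_name}')"
            )
        else:
            instance.query(
                f'CREATE DATABASE "{database_name}" ENGINE = PostgreSQL('
                f"'172.16.4.2:5432', '{postgres_database}', 'postgres', 'mysecretpassword')"
            )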
_________________ test_database_with_single_non_default_schema _________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_database_with_single_non_default_schema(started_cluster):
        cursor = pg_manager.get_db_cursor()
        NUM_TABLES = 5
        schema_name = "test_schema"
        materialized_db = "test_database"
        clickhouse_postgres_db = "postgres_database_with_schema"
        global insert_counter
        insert_counter = 0

        def insert_into_tables():
            global insert_counter
            clickhouse_postgres_db = "postgres_database_with_schema"
            for i in range(NUM_TABLES):
                table_name = f"postgresql_replica_{i}"
                instance.query(
                    f"INSERT INTO {clickhouse_postgres_db}.{table_name} SELECT number, number from numbers(1000 * {insert_counter}, 1000)"
                )
            insert_counter += 1

        def assert_show_tables(expected):
            result = instance.query("SHOW TABLES FROM test_database")
            assert result == expected
            print("assert show tables Ok")

        def check_all_tables_are_synchronized():
            for i in range(NUM_TABLES):
                print("checking table", i)
                check_tables_are_synchronized(
                    instance,
                    f"postgresql_replica_{i}",
                    postgres_database=clickhouse_postgres_db,
                )
            print("synchronization Ok")

        create_postgres_schema(cursor, schema_name)
>       pg_manager.create_clickhouse_postgres_db(
            database_name=clickhouse_postgres_db,
            schema_name=schema_name,
            postgres_database="postgres_database",
        )

test_postgresql_replica_database_engine_2/test.py:347:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:20 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_with_schema" on instance (cluster.py:3602, query)
Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE test_default_columns ( key integer PRIMARY KEY, x integer, y text DEFAULT 'y1', z integer, a text DEFAULT 'a1', b integer); ------------------------------ Captured log call ------------------------------- 2026-04-30 17:32:23 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query) ____________________________ test_dependent_loading ____________________________ [gw2] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_dependent_loading(started_cluster): table = "test_dependent_loading" pg_manager.create_postgres_table(table) > instance.query( f"INSERT INTO postgres_database.{table} SELECT number, number from numbers(0, 50)" ) test_postgresql_replica_database_engine_2/test.py:1086: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3603: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). 
____________________________ test_dependent_loading ____________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_dependent_loading(started_cluster):
        table = "test_dependent_loading"
        pg_manager.create_postgres_table(table)
>       instance.query(
            f"INSERT INTO postgres_database.{table} SELECT number, number from numbers(0, 50)"
        )

test_postgresql_replica_database_engine_2/test.py:1086:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_dependent_loading" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:28 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database.test_dependent_loading SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
________________________ test_failed_load_from_snapshot ________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_failed_load_from_snapshot(started_cluster):
>       if instance.is_built_with_sanitizer() or instance.is_debug_build():

test_postgresql_replica_database_engine_2/test.py:899:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3563: in is_built_with_sanitizer
    build_opts = self.query(
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:33 [ 413 ] DEBUG : Executing query SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS' on instance (cluster.py:3602, query)
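Note that test_failed_load_from_snapshot fails before its body even starts, because the sanitizer check is itself a query. Judging by the frame at helpers/cluster.py:3563 and the captured SELECT above, the check amounts to something like this sketch (the exact helper body and the '-fsanitize' substring match are assumptions):

    def is_built_with_sanitizer(instance):
        # Sanitizer builds carry -fsanitize=... in their compiler flags, which
        # the server exposes through the system.build_options table.
        build_opts = instance.query(
            "SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS'"
        )
        return "-fsanitize=" in build_opts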
____________________________ test_generated_columns ____________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_generated_columns(started_cluster):
        table = "test_generated_columns"
        pg_manager.create_postgres_table(
            table,
            "",
            f"""CREATE TABLE {table} (
                key integer PRIMARY KEY,
                x integer DEFAULT 0,
                temp integer DEFAULT 0,
                y integer GENERATED ALWAYS AS (x*2) STORED,
                z text DEFAULT 'z');
            """,
        )
        pg_manager.execute(f"alter table {table} drop column temp;")
        pg_manager.execute(f"insert into {table} (key, x, z) values (1,1,'1');")
        pg_manager.execute(f"insert into {table} (key, x, z) values (2,2,'2');")
>       pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                f"materialized_postgresql_tables_list = '{table}'",
                "materialized_postgresql_backoff_min_ms = 100",
                "materialized_postgresql_backoff_max_ms = 100",
            ],
        )

test_postgresql_replica_database_engine_2/test.py:967:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:249: in create_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}`")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE test_generated_columns ( key integer PRIMARY KEY, x integer DEFAULT 0, temp integer DEFAULT 0, y integer GENERATED ALWAYS AS (x*2) STORED, z text DEFAULT 'z');
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:38 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
_____________________ test_generated_columns_with_sequence _____________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_generated_columns_with_sequence(started_cluster):
        table = "test_generated_columns_with_sequence"
        pg_manager.create_postgres_table(
            table,
            "",
            f"""CREATE TABLE {table} (
                key integer PRIMARY KEY,
                x integer,
                y integer GENERATED ALWAYS AS (x*2) STORED,
                z text);
            """,
        )
        pg_manager.execute(
            f"create sequence {table}_id_seq increment by 1 minvalue 1 start 1;"
        )
        pg_manager.execute(
            f"alter table {table} alter key set default nextval('{table}_id_seq');"
        )
        pg_manager.execute(f"insert into {table} (key, x, z) values (1,1,'1');")
        pg_manager.execute(f"insert into {table} (key, x, z) values (2,2,'2');")
>       pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                f"materialized_postgresql_tables_list = '{table}'",
                "materialized_postgresql_backoff_min_ms = 100",
                "materialized_postgresql_backoff_max_ms = 100",
            ],
        )

test_postgresql_replica_database_engine_2/test.py:1019:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:249: in create_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}`")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE test_generated_columns_with_sequence ( key integer PRIMARY KEY, x integer, y integer GENERATED ALWAYS AS (x*2) STORED, z text);
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:42 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
____________________________ test_materialized_view ____________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_materialized_view(started_cluster):
        pg_manager.execute(f"DROP TABLE IF EXISTS test_table")
        pg_manager.execute(
            f"CREATE TABLE test_table (key integer PRIMARY KEY, value integer)"
        )
        pg_manager.execute(f"INSERT INTO test_table SELECT 1, 2")
>       instance.query("DROP DATABASE IF EXISTS test_database")

test_postgresql_replica_database_engine_2/test.py:713:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:45 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS test_database on instance (cluster.py:3602, query)
___________________ test_predefined_connection_configuration ___________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_predefined_connection_configuration(started_cluster):
        pg_manager.execute(f"DROP TABLE IF EXISTS test_table")
        pg_manager.execute(
            f"CREATE TABLE test_table (key integer PRIMARY KEY, value integer)"
        )
        pg_manager.execute(f"INSERT INTO test_table SELECT 1, 2")
>       instance.query(
            "CREATE DATABASE test_database ENGINE = MaterializedPostgreSQL(postgres1) SETTINGS materialized_postgresql_tables_list='test_table'"
        )

test_postgresql_replica_database_engine_2/test.py:302:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:49 [ 413 ] DEBUG : Executing query CREATE DATABASE test_database ENGINE = MaterializedPostgreSQL(postgres1) SETTINGS materialized_postgresql_tables_list='test_table' on instance (cluster.py:3602, query)
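MaterializedPostgreSQL(postgres1) in the failing CREATE DATABASE above refers to a predefined connection configured on the server rather than inline credentials. For comparison only, an explicit-arguments form would look like the second statement below; this is an illustration, with host and credentials borrowed from the PostgreSQL-engine DDL captured elsewhere in this log, not a statement the test runs:

    # Named-collection form, as in the test above:
    instance.query(
        "CREATE DATABASE test_database ENGINE = MaterializedPostgreSQL(postgres1) "
        "SETTINGS materialized_postgresql_tables_list='test_table'"
    )
    # Equivalent explicit form (connection literals assumed from this log):
    instance.query(
        "CREATE DATABASE test_database ENGINE = MaterializedPostgreSQL("
        "'172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') "
        "SETTINGS materialized_postgresql_tables_list='test_table'"
    )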
___________________________ test_quoting_publication ___________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_quoting_publication(started_cluster):
        postgres_database = "postgres-postgres"
        pg_manager3 = PostgresManager()
>       pg_manager3.init(
            instance,
            cluster.postgres_ip,
            cluster.postgres_port,
            default_database=postgres_database,
        )

test_postgresql_replica_database_engine_2/test.py:1147:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:134: in init
    self.prepare()
helpers/postgres_utility.py:163: in prepare
    self.create_clickhouse_postgres_db()
helpers/postgres_utility.py:214: in create_clickhouse_postgres_db
    self.drop_clickhouse_postgres_db(database_name)
helpers/postgres_utility.py:232: in drop_clickhouse_postgres_db
    self.instance.query(f'DROP DATABASE IF EXISTS "{database_name}"')
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
------------------------------ Captured log call -------------------------------
2026-04-30 17:32:56 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres-postgres" on instance (cluster.py:3602, query)
______________________ test_remove_table_from_replication ______________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_remove_table_from_replication(started_cluster):
        NUM_TABLES = 5
>       pg_manager.create_and_fill_postgres_tables(NUM_TABLES, 10000)

test_postgresql_replica_database_engine_2/test.py:217:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:309: in create_and_fill_postgres_tables
    self.instance.query(
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:03 [ 413 ] DEBUG : Executing query INSERT INTO `postgres_database`.postgresql_replica_0 SELECT number, number from numbers(10000) on instance (cluster.py:3602, query)
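The fixtures in these tests generate rows with the numbers table function: numbers(N) yields 0..N-1 and numbers(offset, N) yields offset..offset+N-1, which is how insert_into_tables earlier in this log carves out disjoint 1000-row batches per call (numbers(1000 * insert_counter, 1000)). For example:

    # 10000 rows keyed 0..9999, as in the captured INSERT above:
    instance.query(
        "INSERT INTO `postgres_database`.postgresql_replica_0 "
        "SELECT number, number FROM numbers(10000)"
    )
    # A further disjoint batch keyed 10000..10999:
    instance.query(
        "INSERT INTO `postgres_database`.postgresql_replica_0 "
        "SELECT number, number FROM numbers(10000, 1000)"
    )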
____________________________ test_replica_consumer _____________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_replica_consumer(started_cluster):
        table = "test_replica_consumer"
        pg_manager_instance2.restart()
        pg_manager.create_postgres_table(table)
>       instance.query(
            f"INSERT INTO postgres_database.{table} SELECT number, number from numbers(0, 50)"
        )

test_postgresql_replica_database_engine_2/test.py:824:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_replica_consumer" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:09 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance2 (cluster.py:3602, query)
2026-04-30 17:33:10 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance2 (cluster.py:3602, query)
2026-04-30 17:33:12 [ 413 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.4.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance2 (cluster.py:3602, query)
2026-04-30 17:33:16 [ 413 ] DEBUG : Executing query INSERT INTO postgres_database.test_replica_consumer SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
_______________________ test_symbols_in_publication_name _______________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_symbols_in_publication_name(started_cluster):
        table = "test_symbols_in_publication_name"
        pg_manager3.create_postgres_table(table)
>       instance.query(
            f"INSERT INTO `{pg_manager3.get_default_database()}`.`{table}` SELECT number, number from numbers(0, 50)"
        )

test_postgresql_replica_database_engine_2/test.py:930:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_symbols_in_publication_name" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:21 [ 413 ] DEBUG : Executing query INSERT INTO `postgres-postgres`.`test_symbols_in_publication_name` SELECT number, number from numbers(0, 50) on instance (cluster.py:3602, query)
_____________________________ test_table_override ______________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_table_override(started_cluster):
        table_name = "table_override"
        materialized_database = "test_database"
        pg_manager.create_postgres_table(table_name, template=postgres_table_template_6)
>       instance.query(
            f"insert into postgres_database.{table_name} select number, 'test' from numbers(10)"
        )

test_postgresql_replica_database_engine_2/test.py:636:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "table_override" ( key Integer NOT NULL, value Text, PRIMARY KEY(key))
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:27 [ 413 ] DEBUG : Executing query insert into postgres_database.table_override select number, 'test' from numbers(10) on instance (cluster.py:3602, query)
__________________________________ test_toast __________________________________
[gw2] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_toast(started_cluster):
        table = "test_toast"
        pg_manager.create_postgres_table(
            table,
            "",
            """CREATE TABLE "{}" (id integer PRIMARY KEY, txt text, other text)""",
        )
>       pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                f"materialized_postgresql_tables_list = '{table}'",
                "materialized_postgresql_backoff_min_ms = 100",
                "materialized_postgresql_backoff_max_ms = 100",
            ],
        )

test_postgresql_replica_database_engine_2/test.py:794:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:249: in create_materialized_db
    self.instance.query(f"DROP DATABASE IF EXISTS `{materialized_database}`")
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.4.3:9000). (NETWORK_ERROR)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE "test_toast" (id integer PRIMARY KEY, txt text, other text)
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:32 [ 413 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3602, query)
______________ test_rename_distributed_parallel_insert_and_select ______________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_rename_distributed_parallel_insert_and_select(started_cluster):
        table_name = "test_rename_distributed_parallel_insert_and_select"
        try:
            create_distributed_table(node1, table_name)
            insert(node1, table_name, 1000)
            p = Pool(15)
            tasks = []
            for i in range(1):
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, table_name, "num2", "foo2", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, "%s_replicated" % table_name, "num2", "foo2", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, table_name, "foo2", "foo3", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, "%s_replicated" % table_name, "foo2", "foo3", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, table_name, "foo3", "num2", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(
                        rename_column_on_cluster,
                        (node1, "%s_replicated" % table_name, "foo3", "num2", 3, True),
                    )
                )
                tasks.append(
                    p.apply_async(insert, (node1, table_name, 10, ["num", "foo3"], 5, True))
                )
                tasks.append(
                    p.apply_async(insert, (node2, table_name, 10, ["num", "num2"], 5, True))
                )
                tasks.append(
                    p.apply_async(insert, (node3, table_name, 10, ["num", "foo2"], 5, True))
                )
                tasks.append(
                    p.apply_async(select, (node1, table_name, "foo2", None, 5, True))
                )
                tasks.append(
                    p.apply_async(select, (node2, table_name, "foo3", None, 5, True))
                )
                tasks.append(
                    p.apply_async(select, (node3, table_name, "num2", None, 5, True))
                )
            for task in tasks:
>               task.get(timeout=240)

test_rename_column/test.py:785:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
timeout = 240

    def get(self, timeout=None):
        self.wait(timeout)
        if not self.ready():
>           raise TimeoutError
E           multiprocessing.context.TimeoutError

/usr/lib/python3.10/multiprocessing/pool.py:770: TimeoutError

During handling of the above exception, another exception occurred:

started_cluster = 

            for task in tasks:
                task.get(timeout=240)
            rename_column_on_cluster(node1, table_name, "foo2", "num2", 1, True)
            rename_column_on_cluster(
                node1, "%s_replicated" % table_name, "foo2", "num2", 1, True
            )
            rename_column_on_cluster(node1, table_name, "foo3", "num2", 1, True)
            rename_column_on_cluster(
                node1, "%s_replicated" % table_name, "foo3", "num2", 1, True
            )
            insert(node1, table_name, 1000, col_names=["num", "num2"])
            select(node1, table_name, "num2")
            select(node2, table_name, "num2")
            select(node3, table_name, "num2")
            select(node4, table_name, "num2")
        finally:
>           drop_distributed_table(node1, table_name)

test_rename_column/test.py:802:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_rename_column/test.py:121: in drop_distributed_table
    node.query(
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
E   helpers.client.QueryRuntimeException: Client failed! Return code: 159, stderr: Received exception from server (version 24.8.14):
E   Code: 159. DB::Exception: Received from 172.16.8.6:9000. DB::Exception: Distributed DDL task /clickhouse/task_queue/ddl/query-0000000022 is not finished on 3 of 4 hosts (0 of them are currently executing the task, 0 are inactive). They are going to execute the query in background. Was waiting for 181.45623788 seconds, which is longer than distributed_ddl_task_timeout. Stack trace:
E
E   0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E   1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E   2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E   3. ./src/Common/LoggingFormatStringHelpers.h:45: DB::DDLQueryStatusSource::generate() @ 0x0000000029b6e5a9
E   4. ./src/Processors/Chunk.h:110: DB::ISource::tryGenerate() @ 0x000000002d398878
E   5. ./build_docker/./src/Processors/ISource.cpp:0: DB::ISource::work() @ 0x000000002d397d01
E   6. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:0: DB::ExecutionThreadContext::executeTask() @ 0x000000002d3d1c4e
E   7. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic*) @ 0x000000002d3b8a31
E   8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::PipelineExecutor::executeImpl(unsigned long, bool) @ 0x000000002d3b73dc
E   9. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:274: DB::PipelineExecutor::execute(unsigned long, bool) @ 0x000000002d3b6edb
E   10. ./build_docker/./src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:94: void std::__function::__policy_invoker::__call_impl::ThreadFromGlobalPoolImpl(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000002d3da957
E   11. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000001af7fcb2
E   12. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__thread_proxy[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001af8c0b5
E   13. asan_thread_start(void*) @ 0x000000000aa49059
E   14. ? @ 0x00007ffa1cc6cac3
E   15. ? @ 0x00007ffa1ccfe850
E   . (TIMEOUT_EXCEEDED)
E   (query: DROP TABLE IF EXISTS test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster SYNC)

helpers/client.py:239: QueryRuntimeException
------------------------------ Captured log call -------------------------------
2026-04-30 17:24:29 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster ( num UInt32, num2 UInt32 DEFAULT num + 1 ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/{shard}/test_rename_distributed_parallel_insert_and_select_replicated', '{replica}') ORDER BY num PARTITION BY num % 100; on node1 (cluster.py:3602, query)
2026-04-30 17:24:33 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster AS test_rename_distributed_parallel_insert_and_select_replicated ENGINE = Distributed(test_cluster, default, test_rename_distributed_parallel_insert_and_select_replicated, rand()) on node1 (cluster.py:3602, query)
2026-04-30 17:24:37 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(1000) on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo3) SELECT number + 0 AS num, number + 1 + 0 AS foo3 FROM numbers_mt(10) on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo2 % 1000 > 0 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(10) on node2 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo2) SELECT number + 0 AS num, number + 1 + 0 AS foo2 FROM numbers_mt(10) on node3 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo3 % 1000 > 0 on node2 (cluster.py:3602, query)
2026-04-30 17:26:26 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE num2 % 1000 > 0 on node3 (cluster.py:3602, query)
2026-04-30 17:26:38 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo3) SELECT number + 0 AS num, number + 1 + 0 AS foo3 FROM numbers_mt(10) on node1 (cluster.py:3602, query)
2026-04-30 17:26:38 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo3 % 1000 > 0 on node2 (cluster.py:3602, query)
2026-04-30 17:26:39 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo3 % 1000 > 0 on node2 (cluster.py:3602, query)
2026-04-30 17:26:42 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo2) SELECT number + 0 AS num, number + 1 + 0 AS foo2 FROM numbers_mt(10) on node3 (cluster.py:3602, query)
2026-04-30 17:26:43 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo3 % 1000 > 0 on node2 (cluster.py:3602, query)
2026-04-30 17:26:43 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo2 % 1000 > 0 on node1 (cluster.py:3602, query)
2026-04-30 17:26:44 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo3 % 1000 > 0 on node2 (cluster.py:3602, query)
2026-04-30 17:26:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:26:47 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo2) SELECT number + 0 AS num, number + 1 + 0 AS foo2 FROM numbers_mt(10) on node3 (cluster.py:3602, query)
2026-04-30 17:26:49 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo2 % 1000 > 0 on node1 (cluster.py:3602, query)
2026-04-30 17:26:52 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo3) SELECT number + 0 AS num, number + 1 + 0 AS foo3 FROM numbers_mt(10) on node1 (cluster.py:3602, query)
2026-04-30 17:26:52 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo2 % 1000 > 0 on node1 (cluster.py:3602, query)
2026-04-30 17:26:53 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo2) SELECT number + 0 AS num, number + 1 + 0 AS foo2 FROM numbers_mt(10) on node3 (cluster.py:3602, query)
2026-04-30 17:26:54 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo3) SELECT number + 0 AS num, number + 1 + 0 AS foo3 FROM numbers_mt(10) on node1 (cluster.py:3602, query)
2026-04-30 17:26:55 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo2) SELECT number + 0 AS num, number + 1 + 0 AS foo2 FROM numbers_mt(10) on node3 (cluster.py:3602, query)
2026-04-30 17:26:56 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,foo3) SELECT number + 0 AS num, number + 1 + 0 AS foo3 FROM numbers_mt(10) on node1 (cluster.py:3602, query)
2026-04-30 17:26:56 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE foo2 % 1000 > 0 on node1 (cluster.py:3602, query)
2026-04-30 17:27:14 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE num2 % 1000 > 0 on node3 (cluster.py:3602, query)
2026-04-30 17:27:19 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(10) on node2 (cluster.py:3602, query)
2026-04-30 17:27:27 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(10) on node2 (cluster.py:3602, query)
2026-04-30 17:27:29 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE num2 % 1000 > 0 on node3 (cluster.py:3602, query)
2026-04-30 17:27:32 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(10) on node2 (cluster.py:3602, query)
2026-04-30 17:27:35 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE num2 % 1000 > 0 on node3 (cluster.py:3602, query)
2026-04-30 17:27:36 [ 410 ] DEBUG : Executing query SET max_partitions_per_insert_block = 10000000; INSERT INTO test_rename_distributed_parallel_insert_and_select (num,num2) SELECT number + 0 AS num, number + 1 + 0 AS num2 FROM numbers_mt(10) on node2 (cluster.py:3602, query)
2026-04-30 17:27:38 [ 410 ] DEBUG : Executing query SELECT count() FROM test_rename_distributed_parallel_insert_and_select WHERE num2 % 1000 > 0 on node3 (cluster.py:3602, query)
2026-04-30 17:29:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:29:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:29:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:29:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:29:45 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:30:03 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:30:26 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:32:22 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:32:26 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
2026-04-30 17:32:59 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select_replicated ON CLUSTER test_cluster RENAME COLUMN foo3 to num2 on node1 (cluster.py:3602, query)
2026-04-30 17:32:59 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN foo2 to foo3 on node1 (cluster.py:3602, query)
2026-04-30 17:33:06 [ 410 ] DEBUG : Executing query ALTER TABLE test_rename_distributed_parallel_insert_and_select ON CLUSTER test_cluster RENAME COLUMN num2 to foo2 on node1 (cluster.py:3602, query)
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 231, stderr: Received exception from server (version 24.8.14):
E       Code: 999. DB::Exception: Received from 172.16.8.6:9000. Coordination::Exception. Coordination::Exception: Coordination error: Connection loss, path /clickhouse/tables/test/test_rename_parallel. Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E       1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E       2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E       3. ./src/Common/LoggingFormatStringHelpers.h:45: Coordination::Exception::Exception(Coordination::Error, FormatStringHelperImpl::type, std::type_identity::type>, char const*&&, String const&) @ 0x00000000264d5ab9
E       4. ./src/Common/ZooKeeper/IKeeper.h:501: Coordination::Exception::fromPath(Coordination::Error, String const&) @ 0x00000000264d3ea3
E       5. ./build_docker/./src/Common/ZooKeeper/ZooKeeper.cpp:0: zkutil::ZooKeeper::createAncestors(String const&) @ 0x000000002e65c63d
E       6. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::createTableIfNotExists(std::shared_ptr const&) @ 0x000000002ba809d4
E       7. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(String const&, String const&, DB::LoadingStrictnessLevel, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>, DB::RenamingRestrictions, bool) @ 0x000000002ba7b20b
E       8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:1460: std::shared_ptr std::allocate_shared[abi:v15007], String&, String&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, DB::RenamingRestrictions&, bool&, void>(std::allocator const&, String&, String&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&, DB::RenamingRestrictions&, bool&) @ 0x000000002cc14a9b
E       9. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:962: DB::create(DB::StorageFactory::Arguments const&) @ 0x000000002cc0b45f
E       10. ./build_docker/./src/Storages/StorageFactory.cpp:225: DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel) const @ 0x000000002b7de4d3
E       11. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:1718: DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x0000000029214594
E       12. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:0: DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000029207f5c
E       13. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:2045: DB::InterpreterCreateQuery::execute() @ 0x000000002921e755
E       14. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000029b9507c
E       15. ./build_docker/./src/Interpreters/executeQuery.cpp:1397: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000029b8e405
E       16. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000002d1e4cec
E       17. ./build_docker/./src/Server/TCPHandler.cpp:2527: DB::TCPHandler::run() @ 0x000000002d218c00
E       18. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x00000000345a29ef
E       19. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x00000000345a35d7
E       20. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x00000000344a5ceb
E       21. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003449fe48
E       22. asan_thread_start(void*) @ 0x000000000aa49059
E       23. ? @ 0x00007ffa1cc6cac3
E       24. ? @ 0x00007ffa1ccfe850
E       . (KEEPER_EXCEPTION)
E       (query: CREATE TABLE test_rename_parallel
E       (
E           num UInt32,
E           num2 UInt32 DEFAULT num + 1
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/test_rename_parallel', 'node1')
E       ORDER BY num PARTITION BY num % 100
E       )

helpers/client.py:239: QueryRuntimeException
------------------------------ Captured log call -------------------------------
2026-04-30 17:33:39 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:33:41 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node2 (cluster.py:3602, query)
2026-04-30 17:33:45 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node3 (cluster.py:3602, query)
2026-04-30 17:33:49 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node4 (cluster.py:3602, query)
2026-04-30 17:33:52 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_parallel ( num UInt32, num2 UInt32 DEFAULT num + 1 ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/test_rename_parallel', 'node1') ORDER BY num PARTITION BY num % 100 on node1 (cluster.py:3602, query)
2026-04-30 17:34:21 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:34:24 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node2 (cluster.py:3602, query)
2026-04-30 17:34:30 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node3 (cluster.py:3602, query)
2026-04-30 17:34:32 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel SYNC on node4 (cluster.py:3602, query)
___________ test_polymorphic_parts_basics[first_node0-second_node0] ____________
[gw3] linux -- Python 3.10.12 /usr/bin/python3

start_cluster =
first_node =
second_node =

    @pytest.mark.parametrize(
        ("first_node", "second_node"),
        [
            (node1, node2),  # compact parts
            (node5, node6),  # compact parts, old-format
        ],
    )
    def test_polymorphic_parts_basics(start_cluster, first_node, second_node):
        first_node.query("SYSTEM STOP MERGES")
        second_node.query("SYSTEM STOP MERGES")
        for size in [300, 300, 600]:
            insert_random_data("polymorphic_table", first_node, size)
        second_node.query("SYSTEM SYNC REPLICA polymorphic_table", timeout=20)
        assert first_node.query("SELECT count() FROM polymorphic_table") == "1200\n"
        assert second_node.query("SELECT count() FROM polymorphic_table") == "1200\n"
        expected = "Compact\t2\nWide\t1\n"
        assert TSV(
            first_node.query(
                "SELECT part_type, count() FROM system.parts "
                "WHERE table = 'polymorphic_table' AND active GROUP BY part_type ORDER BY part_type"
            )
        ) == TSV(expected)
        assert TSV(
            second_node.query(
                "SELECT part_type, count() FROM system.parts "
                "WHERE table = 'polymorphic_table' AND active GROUP BY part_type ORDER BY part_type"
            )
        ) == TSV(expected)
        first_node.query("SYSTEM START MERGES")
        second_node.query("SYSTEM START MERGES")
        for _ in range(40):
            insert_random_data("polymorphic_table", first_node, 10)
            insert_random_data("polymorphic_table", second_node, 10)
        first_node.query("SYSTEM SYNC REPLICA polymorphic_table", timeout=20)
        second_node.query("SYSTEM SYNC REPLICA polymorphic_table", timeout=20)
        assert first_node.query("SELECT count() FROM polymorphic_table") == "2000\n"
        assert second_node.query("SELECT count() FROM polymorphic_table") == "2000\n"
        first_node.query("OPTIMIZE TABLE polymorphic_table FINAL")
        second_node.query("SYSTEM SYNC REPLICA polymorphic_table", timeout=20)
        assert first_node.query("SELECT count() FROM polymorphic_table") == "2000\n"
        assert second_node.query("SELECT count() FROM polymorphic_table") == "2000\n"
        assert (
            first_node.query(
                "SELECT DISTINCT part_type FROM system.parts WHERE table = 'polymorphic_table' AND active"
            )
            == "Wide\n"
        )
        assert (
            second_node.query(
                "SELECT DISTINCT part_type FROM system.parts WHERE table = 'polymorphic_table' AND active"
            )
            == "Wide\n"
        )
        # Check alters and mutations also work
>       first_node.query("ALTER TABLE polymorphic_table ADD COLUMN ss String")

test_polymorphic_parts/test.py:254:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 242, stderr: Received exception from server (version 24.8.14):
E       Code: 242. DB::Exception: Received from 172.16.10.15:9000. DB::Exception: Table is in readonly mode (replica path: /clickhouse/tables/test/shard1/polymorphic_table/replicas/0). Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E       1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E       2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E       3. DB::Exception::Exception(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000aac13f4
E       4. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::assertNotReadonly() const @ 0x000000002ba75de3
E       5. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::alter(DB::AlterCommands const&, std::shared_ptr, std::unique_lock&) @ 0x000000002bbad413
E       6. ./build_docker/./src/Interpreters/InterpreterAlterQuery.cpp:210: DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x00000000291ce726
E       7. ./build_docker/./src/Interpreters/InterpreterAlterQuery.cpp:74: DB::InterpreterAlterQuery::execute() @ 0x00000000291ca8b1
E       8. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000029b9507c
E       9. ./build_docker/./src/Interpreters/executeQuery.cpp:1397: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000029b8e405
E       10. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000002d1e4cec
E       11. ./build_docker/./src/Server/TCPHandler.cpp:2527: DB::TCPHandler::run() @ 0x000000002d218c00
E       12. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x00000000345a29ef
E       13. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x00000000345a35d7
E       14. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x00000000344a5ceb
E       15. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003449fe48
E       16. asan_thread_start(void*) @ 0x000000000aa49059
E       17. ? @ 0x00007f64ca651ac3
E       18. ? @ 0x00007f64ca6e3850
E       .
(TABLE_IS_READ_ONLY) E (query: ALTER TABLE polymorphic_table ADD COLUMN ss String) helpers/client.py:239: QueryRuntimeException ------------------------------ Captured log call ------------------------------- 2026-04-30 17:25:45 [ 416 ] DEBUG : Executing query SYSTEM STOP MERGES on node1 (cluster.py:3602, query) 2026-04-30 17:25:46 [ 416 ] DEBUG : Executing query SYSTEM STOP MERGES on node2 (cluster.py:3602, query) 2026-04-30 17:25:48 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'95WH1H7LHRKTJXLA19M6304GU1UE21EYE11OXYPR7G9ZAQ6K44MCW7MIDYET9ROX1OZTLWINA8HLVNSQTZLD65P7UTKUV244O75O6RW3OAWW5HMP677MCSBA6TKYYF5C4KLOLBDFTHUP85SZMPSR7OV6SV6FFU64116S3586G6QBSF8CU8K8UH43J42MBT3ILDF4B82Y9JBNBYMRRP7MY1V1WRWFPGEKBUGVOU7TSVEYORGAKUQSSKA9XKNIBL9NSKKI9C1S6',[924, 277, 572, 460, 817, 981, 727, 379, 758, 743, 25, 619, 959, 550, 20, 431, 579, 641, 61, 498, 876, 476, 728, 512, 218, 11, 67, 844, 628, 867, 441, 643, 138, 326, 236, 953, 545, 437, 326, 480, 717, 615, 200, 604, 266, 24, 594, 933, 574, 248, 811, 564, 2, 38, 727, 78, 483, 222, 276, 962, 996, 373, 344, 104, 344, 396, 766, 891, 331, 782, 121, 884, 781, 622, 439, 744, 556, 540, 315, 586, 925, 56, 632, 998, 85, 139, 754, 689, 831, 409, 586, 71, 133, 658, 425, 721, 888, 520, 831, 376, 725, 19, 195, 986, 951, 374, 317, 19, 756, 191, 5, 134, 345, 368, 974, 483, 190, 283, 904, 60, 932, 544, 970, 850, 861, 920, 756, 824, 199, 989, 476, 725, 392, 358, 481, 307, 777, 453, 53, 27 on node1 (cluster.py:3602, query) 2026-04-30 17:25:51 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'M7ICSU2RGZKCFH0129ZUOK1X6OPX4U4C0UHQBV9VCMO3CDF6UQLW267I4M61YYR6GJCX15ADAKHJ9EBZ88J3O7T823GOVNL8FLCYE5JM28QOB54X16EF5EOLLBIUM5FRUHGHL36BQOFTHR6FAIAQDMPR4HDDKTYWBM40IM1STY',[448, 18, 177, 693, 157, 469, 111, 792, 1, 402, 593, 877, 35, 562, 47, 366, 501, 936, 584, 117, 354, 701, 682, 437, 694, 403, 406, 21, 837, 853, 461, 500, 33, 576, 463, 242, 133, 274, 522, 916, 574, 50, 351, 770, 448, 821, 881, 337, 980, 834, 599, 2, 534, 850, 794, 495, 280, 372, 344, 549, 667, 238, 42, 806, 319, 955, 6, 185, 387, 487, 213, 590, 52, 193, 157, 652, 392, 42, 309, 605, 897, 49, 639, 783, 682, 248, 226, 59, 57, 882, 260, 882, 714, 825, 501, 391, 313, 171, 883, 582, 338, 350, 700, 430, 412, 750, 879, 736, 456, 166, 531, 437, 790, 176, 425, 111, 721, 551, 955, 342, 995, 977, 78, 184, 228, 959, 846, 838, 7, 462, 945, 984, 709, 884, 145, 743, 179, 341, 674, 931, 582, 572, 684, 496, 992, 601, 260, 890, 883, 904, 579, 126, 553, 279, 267, 820, 209, 341, 342, on node1 (cluster.py:3602, query) 2026-04-30 17:25:54 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'EUUCENRXSRFIZQZ5D1QJHEDLQ6MP69MKKR1ZPHY8SL0SMS2YA2HDLS2XSY85VTAXCEDEXYNY92LXBXLA0YWGWCGFMK6EDS3WECBGT3UUCYDJO2L9YXG3APSRYU779HEPC7LWA2K2CJB34YMJYPNWGMP4O3VIWVBOCTR1ZD0FPN89R6NU3GA0AK6K4MTXI3TZK2PVYGAERV9OZQS43W22KISTKATCTWEJJHCSDCPVB17IBRSX0LERIWFPHAWVMH3TI0D3E25EBZ4I2E2OPUOOR0LP0BIT2OHOCL4BWONGT4Y2YN82O0W40NVBA71VH75Y54BB5ALJCJA6Z5WZOCO6TWQQJFDJHOFW1RPUQZZDJVC7JX0WDNPAA5Q82ZK0ZOH3E3OLMET79W3467A2XDPWZEY8RYL7YZIFHGWKFJSGNR3XMIYYQ608V3KDZQVNIVALM1M9O2Z45BANAQ9J21ZMZFHPSX08JJS0RWZ8O6HF3KCQFGSMY9WN4XQCS9PQK9GRRMO12ZZWETFEBYMJSC6NX10OM78G94GT2DHYZ6G67VE06R89FGSH905VVUV83K0B162W8EYT3KMI0XOI04RDBS4YHLLJL4ICP09JEZPQFR816Y4MVLEDJBUEMGHTE7VX0H23T55PIREB5GUWSJY1SBR09PVKGIE5LHOK2',[807, 936, 991, 18, 190, 247, 635, 732, 758, 142, 927, 612, 20, 539, 608, 765, 114, 694, 983, 215, 403, 519, 286, 62, 741, 330, 297, 251, 168, 89, 626, 684, 404, 936, 559, 383, 
582, 862, 594, 855, 859, 254, 707, 731, 553, 83, 399, 702, 383, 331, 194, 791, 357, 355]), on node1 (cluster.py:3602, query) 2026-04-30 17:25:56 [ 416 ] DEBUG : Executing query SYSTEM SYNC REPLICA polymorphic_table on node2 (cluster.py:3602, query) 2026-04-30 17:25:58 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node1 (cluster.py:3602, query) 2026-04-30 17:25:59 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node2 (cluster.py:3602, query) 2026-04-30 17:26:06 [ 416 ] DEBUG : Executing query SELECT part_type, count() FROM system.parts WHERE table = 'polymorphic_table' AND active GROUP BY part_type ORDER BY part_type on node1 (cluster.py:3602, query) 2026-04-30 17:26:11 [ 416 ] DEBUG : Executing query SELECT part_type, count() FROM system.parts WHERE table = 'polymorphic_table' AND active GROUP BY part_type ORDER BY part_type on node2 (cluster.py:3602, query) 2026-04-30 17:26:15 [ 416 ] DEBUG : Executing query SYSTEM START MERGES on node1 (cluster.py:3602, query) 2026-04-30 17:26:16 [ 416 ] DEBUG : Executing query SYSTEM START MERGES on node2 (cluster.py:3602, query) 2026-04-30 17:26:17 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'EWXRANOLBUQ4LLMLS2517LWYR1GDW0NI9KM1EJA95MJZWZK09HDKFP12EFZGRR63B',[904, 955, 654, 349, 727, 932, 612, 385, 575, 499, 347, 59, 362, 871, 244, 193, 254, 945, 56, 787, 779, 383, 694, 952, 178, 378, 866, 648, 998, 505, 500, 410, 629, 987, 5, 637, 7, 897, 722, 185, 26, 519, 788, 138, 609, 239, 506, 34, 918, 267, 820, 656, 326, 753, 552, 780, 537, 465, 392, 689, 392, 656, 512, 252, 62, 322, 472, 312, 290, 758, 505, 98, 627, 281, 113, 946, 500, 213, 168, 391, 32, 301, 258, 875, 613, 101, 189, 52, 905, 283, 216, 56, 705, 780, 322, 517, 474, 988, 785, 754, 607, 217, 5, 653, 427, 342, 678, 716, 232, 733, 714, 326, 465, 316, 421, 601, 341, 335, 449, 806, 658, 264, 604, 615, 856, 680, 56, 25, 902, 893, 312, 135, 169, 320, 256, 503, 144, 475, 455, 493, 711, 166, 308, 384, 55, 850, 71, 915, 25, 966, 910, 890, 179, 171, 741, 403, 73, 709, 904, 49, 217, 198, 427, 704, 899, 528, 862, 494, 112, 887, 229, 438, 223, 534, 534, 306, 991, 216, 286, 434, on node1 (cluster.py:3602, query) 2026-04-30 17:26:21 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'3YEVOCSRTKI79V6SVOKRKDERKDCQZ5ZXQF9G4S1Q3WXZWQPLXLUAVNU6C8R32167MPVDPRGUHVL9WNG9LMZLCPIY56KZKLX8GY9LPVDQR05DCYNTEA975176B9R10X1KY77F6Z3JRMNMUJ7OEO1W4O1S2J32KXWFUQP2W39ZL5JQEU2UDBQ28Q9I7XMAS8FQGAKMEMZ8US7T68FSDCGLHVXEHWMZAO9VG0WNCJZ8TG7UASNGY88GOXDVQQ0JC10JR0KQL3BLKR6DCD44BO3J1X6AQ4X4H0BX1WWPXO8X6PBCBRHRLM9FWD1XAS9MS20YNNZD1OCR9CMN8AKRZQKUC4V0UUXZH9AP6K2ANH5TKS5TFA8YJ6NI0QAWZ8IE5RP30DRUQ7Y0UTUYBK9E8X3GF21P6AGBI0494EOFQHH54XR19B9ZUT8KN2QTYQWVT98B05F2V5MJI5X2S9JUUGGFFXNCOPB9UUN42TR6VBGQU3M45S9MLQUH2GI5UD3VFNAVLJ4RSQNQ3C4VH5LI5LCK7A4TMJ8QF3R1C5936I7PJDNLN8U0PB4H73ZO7W21K76HQUIY2PB1Q46KUVTBH0FNN4QO8S01O9R68QVURCZUM85SB5PAOAF5JHFGRP0UZO9S8LN4KX8KK7T9O3G7MV4R7SCRTLV5JL0',[340, 602, 965, 990, 285, 447, 591, 392, 663, 225, 412, 774, 955, 133, 718, 222, 872, 200, 831, 102, 375, 228, 501, 29, 944, 3, 455, 70, 111, 198, 880, 221, 156, 236, 357, 746, 984, 535, 788, 825, 483, 267, 823, 529, 543, 837, 978, 831, 149, 828, 179, 843, 238, 712, 772, 4 on node2 (cluster.py:3602, query) 2026-04-30 17:26:22 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'DOWIX93UIA3IBXU1K83VVWE7PFSNXTDL83111BMLOS9L48H7GQBUNJERUXQZ0Y1UOB1D6YLD5J31FJ0NW7QV1HV4YQXTX9NEOIDDF37AAKVTZ2SA7UJ7WU0JLHIW47EA0N6UCZZ00GYSY8OW481K2XRBKKHGK2NHJMIDJHXCVR9X9FSD5ASLYLO2YRKP9NMN9M41Q7OVF6MB3N3IBT886WUU1I6X3BJDQ2QWM0MNFUS87ZNDYBE8PI9S25JN1DJAC8J25CFQH1THKPT6JCMLEF2E7O2BHKR9YEP8L2W6Z01KWVMN3DPJNKFLF9PA72P2G6CZVYK85W9CMJY7U2W1F6YG89D3WPWNRAUAZUP1WCMOLQTU88LJAJ90P7M7E6KNCY977IFD2BF2AA1XYJBO1Q57ZGVOTL4I7QU9ON12LNJETPXHRSIMOA5QHBWICPILFF6FC2U43CED6EJ66EJ1JP8FHZH6T5KT576HHM9PIBGY7FKPK6U4PJ9L424BEIQ7ZCZ9O1I1AJVAKYQ0GZ4XGKEELS4SZBBKE1DR3ZYI50BRG96G3FD63GURLY643E5T66BI1SBRL6ETDALY6Y3QT6R6WUL8G42DQRQOTTDH2XITPHDE9HBK0KZMH5Z1Y2SR6AJ8FJ4L1B4BL9SWJQZQ9PI1NVP72V716EWC65J47XNZCYIEKDTE149GCJQRKUEUITILV0FRI7JO71OSCNVDLGE07ISIJCMRTC8DNN5CIE60KX8X0Q9J49FS87EU8V5SCLN4V1V8IOUL5EWC8KDE',[0, 29, 970, 615, 563, 115, 647, 624, 402, 452, 949, 478, 554, 878, 335, 537, 907, 924, 65, 750, 27, 851, 606]),('2019-10-11',1,'U0K50MJ3B604KE5P66DKXOU2UFN on node1 (cluster.py:3602, query) 2026-04-30 17:26:25 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'V005P3OMVT5T0UALCZXNSSNGSUNUM5QU3NWZY5OH5KNVGUTM1KPP61AGNX4FZUL4IY6CEQ523H6AA2YQ780BSCE7EZHSMIMF23VON4KNSKZ',[805, 512, 793, 881, 583, 683, 2, 916, 309, 224, 851, 199, 722, 573, 29, 251, 117, 10, 512, 5, 632, 4, 438, 359, 520, 181, 789, 167, 942, 469, 73, 458, 533, 287, 648, 82, 141, 591, 380, 250, 205, 637, 168, 899, 987, 721, 301, 38, 782, 955, 907, 799, 709, 528, 306, 715, 710, 436, 981, 137, 842, 671, 390, 941, 51, 489, 637, 549, 744, 533, 591, 236, 556, 407, 461, 108, 348, 988, 500, 63, 709, 30, 515, 857, 613, 114, 562, 231, 353, 630, 530, 713, 245, 856, 63, 588, 859, 625, 884, 25, 973, 896, 222, 796, 573, 291, 10, 778, 229, 463, 359, 139, 354, 419, 862, 960, 856, 985, 963, 685, 256, 751, 463, 735, 561]),('2019-10-11',1,'50OFRXZ3YUO7YAQ9BD37X8CBL0AJ0K7VLIN7DWM6G8D77S8S0GBR958LE90IC02HMHVGCT75NPUUVJQ0B4CW1XTR9EFT8TYUY26IQBURSG1KG6LH245GEQS4T3F9LUDGMRGSKUXU3ERUPTS922VWRS4VRCZR8EZH56WKMWZSJ9E79LA7RGK1I',[167, 489, 306, 946, 952, 7 on node2 (cluster.py:3602, query) 2026-04-30 17:26:34 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'CJC2XNGKSFSPGOLI0V3EW110OBV9HA31PRXOQI1L3ANBCBI4YQFPTX7BP5NH3QY38DVR1BAGQA0CVDEMGYH2FYO64KLVIGPQKYFY3XVGU6BSS7L5YTMTZ63GGXUEA9MHC3PSKZC4ANLP2A006RNYYC8FK307WST9GXH0LOK89KS5PHRGF7Y7RJG5HW0WF9ZT04ZW3EQW72QQB1GS3SMWCK7ONPFVTVNJAZ5JA1LWKMCGGFE5GU3KUKX3GAJMER74JEL7AJHJS1BAGR9FT7JD6XGP0RYQG9COX0HJL97IJKFG81JGR4DK2YKV1LWCWAFQOAF6E1JH2R004B2NIM4D0F1ZBMJRI234GEHIJG2917T9XLMEMXOY3DWU01PQ7JI1WALW4FZGNTWCBE3DR3GUL1NAOZFCUDE36LVR1LP2C4LERRAPO245V3BOS66LPKZ1V8RNTA3R4YDFB4O55AC6PA32OECRI7CM58XZR39ZNA1LC9TFMDNGTKO1PNQ51ZQUPZ1HCOQBH2IODLH18SBEJJWL8BG9SSYCDPTOVBV77IABQ30CB44O6LEXXSCSP2FFNZ2EE5C3L6BP9UE9EI1TE52COXXW53TITJF3VL8NK1YDYS7CCRU8U2SKCY4VID0K76SYUB4IG1PW7YQXAG9OF5BQTBWKBCIA2FRLZXOGW2DSJDETFQCZJDULAWQ07DQ31X222W5AMU6X0RBQ6MMLONA4U7W9Q6TX82UHT9XDX6QWPQTLIIT4ETZG7VHNEDXPLP3IX5KKLWX3WHNVZCULLJETQRJN2E6FGEYY848PY3E27V14DPG9LMW5W8FVSLSBYGEZMHJ1W48K70W116PVRL8O4X3WQYN0KMN9VO3UDV4DD03QW5XH3L5E1JFHBROL9CMH1QLXA4JGM8FTGV9M70A3L5HW2VDWXV8G8Q2KBLCKB4X0I6 on node1 (cluster.py:3602, query) 2026-04-30 17:26:38 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'MIGDNCL10C0RMQ2SXVH3B6UZAWL9HAMWB5VLX276G9H16K8IA0LSUN9VIUJGJ0HB9VJFVSJLP9TWQMSBJOXNA0QI477RD5XR1UB1GM8PKKWHD25ZDTVSAL802MN34JFHHJ4A3U2RE8C24LBV34CAQFZDTTJCRXQXKWPE6YMDIY2WNIR3UEXUNDF8C4BEUKYWLI1KBI68IXBYSX04ORQ8BDM9X1O2A5C8E4PXJOR6XFW97KE55NCQ7UJ22I1WL2X663OODIA9NHEU8M23OS42RMU1CANPK1BHS4DTRLASTVFYS617XSW9VOA1BP3S9AI9WP8567CHEYVKNOUBSDE0N66VIFZMC0ME6LV7U2OUHXF9CXWTLFZYT4Q26LFRL6BYG2QP0ER8RSH0TNG1NL1IIOCL9ODHWAIWMK7E0F3GTKUAFL6WWP6HQTU2SYJ7ZJ4YSC9ECEII9I6L6OQGPG42LFPELS3BZ5G4R572C87RCW9EOFFN6QS9OOFAPVMKL8JEDGP45HJHG84CYF6MZ3HIREGSJMF4ODBEIEYBLQZOLN20ZRS89GWK0ASWMO890V39M8PC20C2NQ3EOXTI9JCGBJFDBUYI5TM0CYK0MN6WH8PCAQOZIO68E73B5P2VKY27A5H0574H7M0DTC6217390DT06CONEXBF4FDDHLLVMEB0U5HNPJU2VRE6LRXSH908RM80UK0HAFU8OK18WAUEVSK5VRM972OD59QHR6W3N0O1746N5D8RLVNHI3DXUOLO5K28SU5HSCGPWRI0AEBDTYN67CTNHET8F869NZCOAMAYLR6CWKOWJBPB0LNXDZG7BTETBE89A6G66BUVXRDW70JETK5VPJK6NSP2I24CZCOOAB5082AF7GVF8MSL0AYMGDEO4Y9BFO50ZT6OBB4V1BT3EY5KWWI62ZB1X9EMG6EACMRELM on node2 (cluster.py:3602, query) 2026-04-30 17:26:45 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'2E8D1ICEW0TPJU0GS3ZEQR2MIEC3LGN6QR2XAXMFJLAJ19ZS9SA3G2EIBVBVOBJ1QER5DOU9MNJR4VC01G5DIABU8Q8ORTH16M4JUH6RXCX0HUP5JB6BHLKLUVD27',[190, 13, 571, 749, 405, 198, 498, 358, 304, 517, 288, 817, 328, 833, 626, 364, 890, 243, 686, 145, 239, 272, 235, 936, 48, 417, 67, 975, 561, 448, 140, 393, 600, 676, 880, 834, 868, 528, 427, 743, 370, 498, 631, 359, 304, 514, 702, 925, 703, 856, 18, 879, 25, 246, 264, 331, 543, 162, 278, 375, 818, 39, 105, 478, 965, 692, 48, 189, 936, 983, 484, 560, 422, 654, 729, 485, 544, 914, 179, 157, 458, 295, 801, 710, 34, 681, 890, 937, 89, 271, 694, 355, 429, 603, 952, 677, 176, 680, 42, 184, 352, 197, 978, 581, 738, 240, 376, 288, 880, 27, 815, 750, 402, 937, 43, 396, 963, 167, 668, 284, 928, 487, 887, 82, 573, 293, 414, 970, 784, 413, 860, 723, 450, 119, 950, 257, 12, 844, 433, 388, 999, 250, 213, 597, 287, 520, 279, 246, 166, 404, 239, 366, 376, 823, 620, 272, 338, 841, 231, 962, 829, 32, 212, 120, 604, 271, 964 on node1 (cluster.py:3602, query) 2026-04-30 17:26:49 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'1I8B9GHBV10KI0GK5HOZYRTKVA0K4R7IGBG0LAUHWHEQQYNPA78OQ7BNJA4M5BQV10Y7YTCHL3CT6A0RBS19VDELX2V3JSG7RJ751TKHUMF1X3Q4ZSY99K1NXA0N2W44ME6EAP9DV64UW6J0DRZ92BDJ4A474N98OU1L1H37KHY34341N9FY2M8YHHOPYUOH5K81OYTHSNBD52DB2AG556SBDDNWSJQM5HPJP0168JG5J2S4BZ1LKO3LAGZIIP1B2ZXVU746FLUSSYDWORYWBV3MZIUJRO1SGQT32ZHGSBJMT3X59WDORGBB7ONZJU8X3RUSRGOW4EHAKGQWWT2JDRLOJHN9Y58LKKVI4FZA7NU7H',[624, 248, 880, 218, 650, 119, 245, 362, 695, 147, 65, 880, 524, 83, 399, 127, 791, 164, 362, 404, 406, 54, 967, 968, 362, 613, 474, 828, 736, 431, 638, 62, 58, 600, 38, 857, 314, 15, 186, 171, 555, 794, 115, 549, 622, 991, 457, 81, 468, 220, 695, 659, 398, 332, 621, 475, 467, 796, 506, 771, 858, 366, 231, 209, 867, 666, 201, 200, 180, 315, 870, 275, 878, 817, 668, 932, 621, 139, 784, 408, 867, 207, 725, 549, 309, 244, 239, 774, 559, 280, 510, 367, 30, 120, 912, 550, 611, 606, 341, 823, 177, 425, 852, 73, 221, 397, 12, 608, 718, 965, 614, 408, 818, 973, 482, 163, 41, 918, on node2 (cluster.py:3602, query) 2026-04-30 17:26:55 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'9V915HNVS55CHAUVDJYW2H5IIO8Z3SF6XT2ZKNFFL0LXRSSYP2J2TJZ0YPDL80HM1WWST5391CBKTLPGSZJCNCERNEKWXZZ16QW55TE5T46N5GKGF4CONZJF8ZP67CEXHYQHH8NORRX5PF5DTFH3GIOD50VXT6CV7S59QGCRF35XQMYV9WJX534SPUTOK0F0P6N30V9M7TD0BU0R1HEMDQ95UVFTC3XT0YAAUB8ONAUGW8TXVMSHEIH0NRN9CT2UA5DF0JL8AVVMNLD415YNYEPTFG5YCWL2XL66TEFFIIKR8GD82E0FFSA29UU2Q98PZ77DJSZIKUVIJ03OJCNH44OQ093VOEHXYJI1S38KNSLA0JQQ1K1EFF3U9D1QNJINF3T7COPPZ33X2ZXD4VU5AH9QNF7J4XZXOT0NPQQ9FERQNL6F8JZ3MXCDGHK9PWKC8F33XP3BOEEK988FGYJOLBO9TQUCVC2IBQP4R6OTYCFETTNYF7OW78MS4TA38NRR7JRY2H66NJLJ2L08I9RPQDAMZ7D8BX0GIVLD72P6OZ32M4VQGQXHRAK00AKUHMZL8MEGSGZ6DJ21DGPQZ92QP1Z6Z5GTO31NZ7115NDUW6MSIVKYKVK0ZITKRVMTYY442FWF8NFK42P5WOOHKJT9SVVQKHB301WIAWKZ0F0UFELHG1XPZRP60N3FAXVX5W76LTVHM00V245QPK52P3VD0NDTRIYU6MFDZLPOXPLMP4BDTQHF4CBZEZXJ5W8E1N2PCV1V6298FCLLAVSYISID3GV3OOD4A9PONHMYZ42EBLTSVDU97OC7QLUAL8E7QEXBF2F86H6U85D87ITWTK5PTM12SCRNIFJX9MQ131NW63KJRL9M14K0MHIK3EV1IIJAX6CC2PDCN152',[540, 213, 717, 169, 816, 827, 66, 6 on node1 (cluster.py:3602, query) 2026-04-30 17:26:58 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'YB7493R0PP1WD6LMGOB',[435, 324, 739, 149, 304, 795, 110, 382, 992, 282, 365, 525, 168, 731, 625, 948, 11, 734, 44, 372, 958, 86, 458, 914, 471, 576, 731, 375, 351, 718, 44, 437, 365, 289, 10, 239, 469, 579, 271, 809, 367, 752, 342, 971, 680, 338, 265, 31, 458, 672, 840, 80, 110, 298, 41, 91, 49, 387, 26, 942, 742, 677, 125, 336, 98, 411, 672, 570, 896, 283, 642, 592, 402, 292, 16, 18, 767, 138, 713, 106, 43, 195, 357, 815, 976, 275, 983, 865, 952, 696, 248, 864, 236, 237, 381, 29, 885, 191, 155, 118, 0, 133, 965, 937, 670, 432, 852, 800, 561, 738, 491, 430, 799, 659, 22, 345, 828, 23]),('2019-10-11',1,'2E10FNAEO8APN5NLK3A820FZ8LT5BTRSLYZPMSID2NBZB0KVLMMB852KYIJVFLQDJRWVMKRT2HBY9LVXTGPCMWPH7X8BPCBDH2LSCHV2KXLFR1MHXI2OIKP7LCR9RT3A1RMTKPTFUMTFK926NSOSAAY1I0AFXTA681EZM8K4HUEPI50J5K4X1KU5UD1XAW0ZOQL8RUPVXTF80XZU8D6MNMAUNQ4GIOBZR0C3C10ES12K12CUVEOA2PSEQ5LHZ2EUMAKGBZIL7R1IEEU9AZCB7BQZCLOBJ50GCT',[143, 658, 956, 201, 441, 937, 966, 759, 545 on node2 (cluster.py:3602, query) 2026-04-30 17:27:01 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'294FQU95SGMIXASFEFI5X7DK6RCVL5OIY9OBHS6E45DFLR5PAL73XCJHMXI88ABXDXCJUBVVGW53TKA21647PS0EC7O69TH9XOVUKBBGYDTDK7Z23NQEUKKDT18HZI480Z7SQIQ969FLV3HH97JP240YKIWVW6J3I4CC6BSIYXO2TUKIE2W99XOJR7RZNEWQ81S10NWDBS17BZKA6EM87N7XKWA6TO3GC0GYJNXOJ0F6IT5OVIPHGN9CA8VYVFFBF0D1VUXVT8M4LQGZPFA6719AWD269MY9KNZOSJVFWVURP5B4YBGB9O81VYB',[920, 205, 216, 342, 329, 771, 81, 493, 677, 270, 314, 396, 985, 348, 123, 284, 220, 81, 12, 981, 415, 23, 33, 760, 902, 67, 765, 500, 915, 907, 516, 628, 89, 853, 315, 203, 211, 557, 310, 289, 879, 365, 718, 631, 984, 250, 163, 51, 304, 684, 192, 372, 427, 683, 830, 997, 139, 200, 239, 339, 545, 458, 261, 84, 443, 79, 173, 998, 327, 901, 118, 729, 93, 253, 903, 848, 670, 467, 78, 981, 900, 264, 588, 829, 435, 41, 352, 708, 49, 278, 969, 882, 586, 543, 998, 651, 424, 992, 832, 847, 901, 16, 0, 981, 459, 159, 609, 39, 151, 669, 635, 348, 296, 2, 921, 832, 423, 581, 851, 185, 148, 841, 16, 261, 186, 979, 717, 204, 827, 417, on node1 (cluster.py:3602, query) 2026-04-30 17:27:05 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'C86D8WME37HWV02TMBDV78BEFWO979Y7JTQXUR4ZPN9APO1RNR99LDCH73X8MPNVVX08ILJ7325PMBTYS3CF4Q6HEKLCGZWCTZUQC03LAIVISYRI021AH48JFZ4FIO1WNLGYQZRCMTJFG9IM675340K5G8146EXD73J3VT1JPYN42CNWX429A6PJIYMG0YDP9BRE3ZJTI46P6CA3NUCG8ZXGTT1ANMI6URNGR79IQRMCVNMQLUU527FE9ZNC2F5GYBOWJ3D8WL7L9J8PMACQXSV3T912X9PYPDPXV1H0PRWAXZPJDE1COBGY43NAJJ7KG3UWDZL5IQRX2O5RL4TDKID24NWKTYPNBWTIWFFI3837ZCSH0ZP483FRK4YMQWM11JAS68WP2VPAENXEQ0K5EBRWXFTO91IL42YQ462NQNWML94H09U8X7OCRGVVWQVN5ZITHYR50TFH97A7KL09OGOX9LXW987CZXI46YZFBZMCUEQABQ9C8QWCFJYN8OC836EIUR7V9L73XMI4Y50ACPXYNFFUFV3CL4H5RB4I1CEM0C4JZ0BQLZXQD9FU87UB0OSGQREWBN4KS7ZG86P92QRNLQ831P20Z2TQZYAA94T7100XC8D5QRFPT3BJC28GYS71OU58QSZ8CHCP1HDQX3U05O1X6FJES9CA3GJ7GC9FGGVQYZEN1KT4KUFBR00DS88CD3KUZYSHKXY9YQAOB02FMSTM0NLVIGZH7JZO0Q45QMWKMO7IDX0J3JB3V71QLXH2X301UT5AQCFLZ0LL9IO55807RE7K08VMFDBHO2Y34Z5LJK97DDHRR1RV99XVX8M5PF0IXQJJTCNXMXNIL85XPKFR132X5P1R3BU3DUF76S26HU85LR5NFS99VHS0HCQFPD37YN0W9LJFD5FIZNKVPNSSWXH9OPMYQ15HED84JTTTSR on node2 (cluster.py:3602, query) 2026-04-30 17:27:07 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'WBFL3TORYKD8BB1HJ8M8A2Q687R9IM85BK6J963JZSKX9MKGLEUYGWAOED2GHLFHQYYSBRYOY0AM51ZO6LLZ1EYCPH467047SW0IUFFK5B8FH35G6R0IRDBPLKQ0V9MELS3NILCLUBNAXLNCONKT29A99EMGC6FMC0ISL5NERKR6RWS3400W4VUH1JEQOBUPX2HTK6D2IMSYRRKAI61SZGSIAQ9YS30IOFJIR3PM6P8E1OTAIP3ZJKG904UVQNJFAQQPS1D1GLMS8V',[275, 525, 290, 764, 345, 785, 331, 150, 313, 251, 476, 550, 160, 781, 427, 665, 912, 187, 629, 649, 930, 816, 983, 168, 526, 458, 420, 217, 266, 770, 497, 68, 267, 795, 671, 330, 991, 431, 1, 450, 660, 963, 385, 355, 353, 113, 157, 494, 300, 421, 143, 370, 512, 111, 28, 291, 363, 123, 684, 748, 414, 169, 417, 933, 882, 566, 421, 532, 438, 499, 76, 704, 283, 655, 912, 334, 87, 882, 931, 604, 410, 797, 734, 108, 617, 214, 646, 962, 75, 379, 706, 952, 596, 712, 849, 341, 87, 730, 414, 287, 41, 747, 927, 709, 853, 536, 957, 390, 461, 573, 417, 392, 393, 362, 0, 494, 745, 410, 23, 746, 222, 875, 252, 564, 914, 879, 589, 152, 379, 867, 662, 619, 608, 544, 633, 222, 482, on node1 (cluster.py:3602, query) 2026-04-30 17:27:10 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'2UY1D6T075ZQH40ZN66Z0LNM9KX89FGKM733BHRZH9L1H0RIDIMLVNROEDBXPNF8AA7TYYBCZGMF6XJ8L3VZ5AB4BUKUDNWKIOEEDE1PU8FVXIEI8SNJGZVOQQPGTH52G33ZT3E0ERGF5JEFN6H5GQ3YA493J05PCPIUYLOHCIVLSQGDEJK5FYPJJXYCK71RH75A6U2XWFECQQDSGQVYO0KF5O8I9AMHML7XTVM054Q5LB2SW89AC2VZRRZQNXC57G6TCUVOAPTCAZQGRHH28USSIOX2TNQTYR55ENNASXDLOTWD9WEL7SE8N0PJHTGFB0GWW85QD1THYVVD2UE5DQJXERP30JYCZOBAOUA06N21OZL0BQ957SKKJF0VZR3UVV12GUU7TUQIS5U1GMRMDKVTYKLTT805HBIDCZLMG6SLBD8MXECGDTL92XPYBRH',[984, 958, 444, 134, 652, 321, 966, 117, 486, 209, 0, 793, 981, 562, 246, 895, 691, 61, 891, 87, 753, 117, 468, 502, 306, 748, 328, 492, 40, 889, 123, 169, 383, 552, 374, 21, 361, 365, 467, 415, 750, 471, 648, 323, 35, 381, 825, 900, 683, 324, 307, 543, 947, 840, 157, 770, 892, 468, 399, 236, 504, 919, 494, 395, 334, 147, 10, 7, 600, 767, 348, 411, 855, 412, 809, 646, 512, 102, 662, 188, 815, 315, 10, 14, 641, 940, 36, 116, 802, 638, 205, 843, 105, 482, 243, 661, 557, 590, 825, 66, 847, 746, on node2 (cluster.py:3602, query) 2026-04-30 17:27:16 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'G96MWLPMWRVMN6',[376, 49, 427, 24, 221, 238, 887, 314, 622, 474, 905, 202, 946, 108, 273, 687, 416, 33, 258, 446, 345, 902, 810, 309, 693, 668, 188, 547, 995, 574, 461, 601, 944, 571, 533, 644, 483, 716, 312, 609, 522, 274, 898, 366, 268, 287, 348, 682, 181, 467, 7, 734, 864, 
491, 107, 640, 365, 774, 775, 520, 202, 240, 805, 235, 474, 93, 398, 680, 38, 162, 897, 941, 957, 823, 852, 583, 209, 95, 339, 869, 876, 712, 491, 525, 527, 605, 334, 191, 393, 183, 255, 897, 771, 801, 262, 193, 1, 690, 439, 153, 531, 705, 48, 755, 357, 362, 772, 748, 744, 694, 251, 497, 209, 239, 579, 422, 305, 556, 150, 50, 666, 352, 103, 448, 739, 467, 444, 292, 807, 912, 140, 114, 212, 827, 299, 315, 801, 679, 419, 409, 377, 518, 193, 545, 153, 437, 141, 434, 220, 767, 267, 460, 860, 378, 239, 932, 52, 629, 597, 380, 731, 886, 399, 120, 384, 933, 793, 648, 275, 576, 437, 923, 89, 453, 26, 508, 577, 302, 313, 252, 249, 794, 260, 354, 587, 706, 80, 459, 849, on node1 (cluster.py:3602, query) 2026-04-30 17:27:18 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'YJFF174LDL2AXC3ABU8CRJW3BQSKSSV2V6TODS463VML0RQ7RS6NOD9GYOCTWHOZ77EROHB',[714, 15, 356, 478, 117, 275, 467, 433, 694, 765, 522, 283, 369, 823, 349, 203, 671, 575, 275, 419, 434, 376, 997, 446, 310, 42, 549, 452, 834, 27, 646, 231, 595, 440, 354, 776, 490, 462, 254, 27, 375, 501, 791, 989, 71, 323, 542, 45, 145, 899, 49, 606, 575, 504, 430, 346, 699, 486, 767, 742, 902, 285, 552, 436, 216, 412, 195, 439, 381, 391, 696, 546, 830, 290, 489, 754, 940, 795, 564, 348, 951, 550, 308, 735, 692, 871, 850, 489, 572, 525, 743, 641, 451, 278, 619, 704, 285, 619, 33, 355, 862, 261, 385, 156, 413, 353, 912, 102, 332, 877, 434, 362, 329, 683, 478, 620, 487, 289, 200, 926, 493, 38, 373, 748, 237, 832, 15, 611, 736, 642, 791, 12, 959, 724, 673, 118, 911, 507, 447, 350, 789, 487, 369, 745, 961, 452, 788, 708, 361, 490, 24, 176, 416, 547, 892, 468, 167, 549, 281, 734, 260, 61, 266, 669, 799, 578, 30, 18, 761, 47, 805, 894, 296, 593, 448, 23, 290, 428, on node2 (cluster.py:3602, query) 2026-04-30 17:27:23 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'P7POAP7JUN03RNYKBTDPMHAWYKUSD147LZN5VP31Y2WW14QVSTWKKECGXLXLMC3W8B33ATPQ5KQJHSMHZV31195ZGV7U0S6GE0JLNS00PK4G87MG8TVBE5TTFH1UVDHE2LSYCL6K5DC72MGL65DHAZ8E0RAZ6H2K03Y9N572TE7PT2TALQVRIYTLE7Q05N28FTXLG8L1RMEDVY7JXYX111STAQT9XSN4C8GQMR9BLYMV1N5V04BJAW1769OKI2CVH8036DG1N77TVXGUDBL2GZ6X8DON2AZEHCENXIB1F8M3C8J33K0PE4BLUGYW6RE62VHKEBN2A802Z9JP',[130, 875, 92, 156, 882, 801, 850, 442, 935, 595, 112, 476, 85, 8, 699, 519, 384, 660, 524, 104, 69, 785, 536, 596, 306, 535, 887, 390, 817, 970, 173, 201, 144, 927, 52, 45, 694, 913, 759, 769, 227, 409, 712, 893, 54, 757, 68, 82, 453, 417, 74, 949, 882, 734, 430, 796, 784, 385, 99, 790, 781, 611, 849, 761, 573, 612, 310, 45, 557, 961, 25, 175, 724, 655, 213, 715, 323, 106, 63, 694, 482, 354, 211, 266, 307, 415, 296, 46, 967, 37, 275, 695, 848, 599, 584, 462, 221, 508, 293, 353, 365, 480, 675, 150, 751, 992, 948, 428, 580, 700, 530, 412, 235, 999, 689, 616, 161, 830, 215, 665, 662, 827, 494, 608, 953, on node1 (cluster.py:3602, query) 2026-04-30 17:27:26 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'MYGPHJEWK203N45RBX960ANLSULZP44U2SR646TV8A6F3XG8Z02SEILY5HZ99RJYKV4OBYFZ2EYGT0N7KODN6LWCH0FEGGSE333TNQS5HTHU7VABOB4261IWBWC0269TGG4HOEG6S2VK7C3YCTO4LYYEOCG8WPX29YIORAKZQK8P36Z9KCKZJQKLENECW0BZT0JLCT2RULDTIV144U6Q1LM81TL0DPF8EEVIMVMF6OV1PWU2HWFXLDCRQLAI0H616T6R5C9M2UXBNCOXCZ8O64HFCGE1PJ14DFP1SNSZ9XJ4JQX4I3TDNEIEL5G6ET1BRUCHUTLD397LM9FLHYJII8NV1BH3SW6ZM2BDRXMHUU6X7PE2XUFZNLMNJ0R8UPEU671NHFYAN9HDKK7SCSKC2JSH7UTELA7BDICUPTK7AT91FH51DVPB61J47YBH33C73997NJT143UD2PJ2MGP9Z3T6CCO05NAZ6MY51I5Z1XUTSR11WBZMYP37NT74CI80EVHJL6GFA40ZQBPRHXTY6E1AH9UGSVBF8WPRTZS8LNX4A2OBLT8T68GPXOZSZC9P4Z3AE5BUP6W8DM197F6D4SEYP2640AWWN9NE69',[672, 0, 793, 221, 79, 944, 277, 375, 959, 175, 671, 28, 343, 919, 697, 699, 588, 106, 875, 989, 738, 875, 456, 81, 542, 224, 504, 397, 840, 505, 467, 660, 947, 723, 271, 561, 853, 437, 97, 641, 750, 957, 844, 985, 864, 606, 220, 0, 236, 411, 862, 441, 176, 967, 951, 381, 6, 370, 54, 877, 568, 187, 403, 865, 269, 798, 302, 788, on node2 (cluster.py:3602, query) 2026-04-30 17:27:29 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'AI27DEKA6WW0E98KIO0X0UGB31MOW7OSKOTC0Y4PXJ5E6L6RCVXHT5MPG5IAXVIIMQ3CCUD8C66217X1QWBJP9V5RB7JZQXF08GNANLE06X0EFZY5R08PCD7AV8P0ZORT6KPDU7QTBAB7EBMH4V88YVEH0U18GY62W6B5T21CIU0SHDGEJ158TT5FHVTS34W33LVNUS15B1YJNB1MHU6IWJRCZ337CJQU5ABS14CTWY2K1NGBR19XTM1UMXCR2QWWPT4N3BINMMPXCW01X3QWTB7UG3NRKGNA0WNQQJUNYDDNSHASJT84TOKD65LAKNWB5GHTCRBJ8LLS65IS4G4RRNZ1UTUI56954GDQ63YPZ4DM3IQJ7F2X1MUB43J6G0WH0UIAONX8LRW2MEAN17WOCET6CMTMAMKA29LTOFPGRPS35XL3ENS35UWAPVFO939RQHSWCRKZXRNVU6ZXQRCJ6F3PAH7V5Z8C9PXK40QJVZ1W5DEA9B2Q9A7RXIM7VE8MO13P0FG86295BTNRYDO3MPU8KXY4YI64BFBCSUHZ4AL3H4HYGODJ7RY149QNH7R05EOBPZXFU4W028PJY2W5Y2Z5WIPF8TBJLJSV9XUYZ53L01DFOZVLT3J2YP9E6H6N1YSVDYM76HECCRBLHCPAIDRV63FIXYSDQ03UIFKNYUON6HADEB2JI3J5NM78I8EYPIZADC6B8G6BYSURZ6QZACF3D2CMKVP3UORW854S0GEQ2T09LG4VPQA6LFQSRKKD',[180, 616, 140, 532, 138, 927, 799, 580, 319, 939, 988, 262, 819, 197, 790, 877, 113, 525, 168, 455, 485, 963, 589, 246, 362, 743, 587, 592, 686, 197, 386, 923, 290, 647, 401, on node1 (cluster.py:3602, query) 2026-04-30 17:27:31 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'MA78VDMENDYMHOBMHS5VQZUKMXMU287FWLPPX4B9P2CU2QBVMHWL8NSVSWN8IQNOH735D014C0OU6NQ7H0UBF5F89WZJES13BPNGC26SWKTQ1QOV1JMUA9DTOKR6ZENKHQ052DK2TPAYBRUPU1133OFWZ36XJW41IEIUH6BFR3AXW8CFSHWF0ZS26UR4LVI4Q71UX5W1VRANFR51V20JILWO00P20ETEZAUEUOE9QQEFV10CS36H0SSBW01UNJ4ZDIZ8GJROU9N7UB77DVS3ZBDV3OFW4JEP4IFISXIBUVQP4AR470F6SRZZSR0OSJ3QVPFV5HBZ6QH54Z67X09CKLXHGN5KG11B4S8T52GAGAUPOMYGL2FOX6JS15AQN3JAU5I3FOFWNTW9Y8ENULMHXDUW7BW0TDB23GHE9CFU7PCQ2FQIHMANLLT4PBCOO172POZ3EVUSD1ALNSA2O5GFI0Q1QG2BYWJOCOXJPNJSUQ3QKVX23EAJ8VTPFDEIKB6GFOCNO9LAC83OCHEMPELKCYCFB59CLUR0X2FZBE07PUDRL883XCKZC1IYAVF0JP8OC97TWY3IV1JMNLL4EXIYY1YR6LJ6TUDVH0LXIKEWU1MKI7YTSX03X3DKSTSINOW9UC2TX8QJOU5V8C7YCJPF3E5KCCHE1JFALQ9OF51ZN9X4D5UCQUUCLYEMNJQ6AIHCUC1R44ESQ4G20I7RGQBDMZJ1TEQ87UNWVXL5WNZJW1177F0KQRA99YTRFHPWY6K6UJDP4NHRH49PTQ2SVSVIQK809H09PVW8WLIEIQTBOTIAIJR5XW6TR2KA6PYIS82YU49Y9H28BBV6ZHQ4MN8O85WUE03NWHV8AU1QHNUZ9QPUKR6G3ROOO4052LL230KU5TFV3QSUTBD4K3RFEB0LCUB8M1QAUH45L9V6LHW7KIBYMI2KYV on node2 (cluster.py:3602, query) 2026-04-30 17:27:35 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'PB9ME0XMF7GWSRZT2V2XA7V806MT9EZD1FKZJDVLKW0KTM5204YD6Z6ER9KOHPO2OPAARHR4ZIZIKZ3Y9WNIN86NTTHH16ZKE1N8QAHA4PGUO3GI5DI4D2PYXQSG79SZW3SH4HU7OCDGY9AHTNM4APEUKQMCKB7U5GOWB7RL01YOOHUK2V1NGOF6V59IKECZ95WGWY3FYTGBF5CRNS32JL8SF79Q7PR1D6VK6JQP0CI6EYXOFFK7JCMYOCGWJQV6UB4EVJWJ5ZAED5APKYWH30NJNADBG0CERGD3SP4UJE',[127, 907, 497, 64, 191, 228, 927, 170, 575, 970, 582, 989, 302, 637, 511, 348, 361, 312, 629, 366, 504, 287, 458, 364, 752, 272, 401, 378, 89, 423, 449, 311, 413, 8, 419, 949, 493, 982, 402, 712, 865, 240, 85, 622, 485, 426, 317, 684, 373, 798, 240, 307, 657, 889, 426, 124, 572, 590, 322, 420, 280, 732, 693, 418, 690, 895, 891, 340, 770, 624, 257, 582, 467, 724, 66, 257, 201, 793, 361, 864, 347, 752, 858, 799, 521, 583, 219, 90, 818, 540, 987, 189, 851, 173, 653, 702, 103, 814, 619, 269, 607, 731, 303, 238, 464, 244, 365, 559, 301, 179, 455, 725, 511, 45, 427, 521, 141, 209, 45, 575, 351, 421, 47, 129, 223, 891, 774, 426, 566, 45, 324, 8 on node1 (cluster.py:3602, query) 2026-04-30 17:27:35 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'6YOEX01ZTGCCYZ248BQPW1FN9J23VXBRWF5JR8HNZQBNVPABYIQG4P8EYY5JUTGQ3RVEDCTY5V57Q1OER74JRGJEYCNXOPCLUWLP2PB0N8ZPPHEUMCIU83VHQ6SQSAIJV4MWAUBEEU3BM8L7TETQVF9C5Y0TXHQE3HWCR0FII0G9BAQYJCEM5AJZ28ASUGU2E1Q0L0VR6E1Q7XKCBUPCNJIH374J7F282U6LM488E9BI5353ARDS65J0YS41NL5LSQUU8R80SAZQSG3AU7MWW6J18MWM94UAV1MMEAB6HLO1Q0XL1O2OFV8HCX619WLXR2QCXVAWA23PFIWOT0E58GNNIU34AO56DSRYHCV9OESVL56JU04AERVYY1ME65BSHWJCWWE1OX973K7FHU1YMFFQM68ZSOBU51GRWQ9CD9J8DQ68YKZ9QJODE3D6FPHZCV9ZMR219AWJ4X23CNT5AZMOS2VXPYXEJFGDK7O32IIWA1LG691JHK2LYTODCJN7OTXUV8Y3F37YML99S9Q1HWM17OZ4FGMYIKDA7S9D0D2L4R7EBTPAR0VFN2ZIA0Y177K3MKD5DLMNPC06E6RXKJZGB6EAJT0Q5V9CH20NAF618FH57RGRWI46RAQGQO8NGS14NPJEP1MWBIJBB4F6P4WK80LH2PPU7GV4EN0M25NJ40G826D1EX2TL27Y1SKEQL1I36NZR9ZK6RISOOEGN903L7TGFHCR9RBSFNZ4G7NYAER398C6DI0NPHCZZCQLTZ4N7UN8I2UQI3KLCBQPTUTPEAZJGPLTVQC144INA7TK68TMKGX3YRHPEMMN6HCY8D5U55TLT821QEFI6RSF4TPQUQ9CGNFHRUX3CIGQICR1GGO5',[354, 633, 555, 829, 970, 466, 85, 905, 918, 909, 215, 167, 981, on node2 (cluster.py:3602, query) 2026-04-30 17:27:37 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'MGGGBXO48GQ3BSS93Z7WEIP3',[838, 659, 714, 366, 17, 701, 483, 992, 152, 140, 294, 129, 794, 911, 533, 189, 286, 485, 24, 793, 197, 507, 557, 638, 885, 319, 980, 829, 768, 679, 52, 44, 681, 882, 441, 240, 876, 417, 298, 502, 506, 618, 691, 487, 39, 734, 276, 376, 15, 334, 808, 510, 0, 484, 18, 296, 665, 971, 271, 867, 699, 623, 109, 632, 9, 931, 265, 126, 354, 700, 385, 224, 572, 661, 371, 583, 152, 544, 375, 643, 887, 312, 590, 810, 860, 490, 181, 924, 362, 620, 798, 640, 398, 73, 922, 431, 767, 752, 297, 748, 247, 717, 115, 794, 555, 793, 532, 177, 878, 276, 757, 420, 520, 590, 558, 242, 306, 246, 86, 704, 119, 267, 321, 287, 138, 755, 853, 154, 176, 744, 335, 917, 566, 915, 741, 539, 707, 220, 448, 64, 938, 139, 484, 188, 9, 948, 645, 461, 499, 781, 706, 576, 227, 666, 599, 374, 565, 102, 18, 459, 484, 677, 542, 49, 303, 608, 65, 569, 751, 564, 786, 790, 725, 555, 542, 110, 530, 904, 554, 266, 463, 476, 303, 366, 128, 322, 34, 18, on node1 (cluster.py:3602, query) 2026-04-30 17:27:39 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'R5H8TTBUC0XHAXLV35ECTVTRHY6P0A9RDFRWXSG09UDRY2S9IMTC20J2RWSPJC7T82VH1HYXIOH7SAQ8YFOP75TRGLI7GUY5LYEI5ANXP6NG7KLHAN7MXBEVJ0F4NLGK4KON9VXQS1JK0DCZ6ATLF26EZ8Y4YTQ0VV0Q20DZPUR848CB4A4R5SFWL8HB23O66P50TDP6Y1HDTG12KN3CP6NZJOQ8HG7Y1AL32Q5O82P4YPW5G2ZSQTBVIGX6SWZ6M1WVOYNIO2J6FTDUUERTOZOHDRWHVJS5UQ9UH493ABYCL0G4IT31CTQIJBJ0XVZ7NZC8H1SRCBYI9R7IHVL',[842, 582, 907, 779, 291, 942, 432, 52, 929, 27, 186, 547, 831, 937, 326, 57, 99, 841, 403, 363, 313, 197, 379, 231, 363, 290, 319, 265, 618, 515, 402, 593, 196, 245, 273, 783, 188, 66, 465, 93, 650, 116, 414, 986, 319, 391, 157, 526, 837, 755, 930, 732, 75, 429, 954, 691, 60, 282, 306, 529, 440, 351, 40, 794, 939, 32, 883, 732, 933, 732, 978, 306, 841, 357, 946, 956, 294, 766, 86, 69, 422, 173, 808, 829, 590, 848, 588, 498, 709, 353, 811, 778, 432, 309, 874, 53, 73, 266, 155, 190, 686, 283, 353, 906, 300, 331, 4, 622, 711, 934, 33, 207, 739, 859, 714, 778, 48, 292, 960, 252, 443, 95, 312, 913, 77, on node2 (cluster.py:3602, query) 2026-04-30 17:27:40 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'H4U3JGE6USBHY6X4M5XBEWJETLDQADDXQSV3IN8M18EWXJ1W15UC7C28AXQPIMXEVAEE9ONDRPXM5VBVNG7512LRILFV7CRSLVGHPCW65YB7UUP7UU98VPDQZF8FVFSM2U7KL8N2744VOYQPNJIRO6J3MQDRQ9VDLWU2VZRFLK5INGID250822ZIMD349QZJKPYZLQ45PVNRORJ9H9S309HS9N1I9RD35C2ABMMRX5VJ3C84V92VPTQTXE7PLI86305MJWKBD72KG3VZUXSBKAVW11UOQZH39WUSO5YFDM780G0KDTC3VG2WW8TA2JQTL7KWOID2CZLOWRNTLWN1AWZRURIA95QUAB14JM3P9EGL4HD98NKWG3PRIRXOQHVDWXDWV88TK1ZSIO339CNC372PLAYQF2APCQ88FBR1YBRPRB59HLF2T8P891JK73CFIYX3XPCOP90BRBIGQE3P2WI86B0K9H7D2FACO2M0BYKYQC4GXGVEIJE4GQ1H6SJNQ03RHB4942LN7FGY1J6UOC5E9W0I1NX4YLJI1I86XXO6EC7H5RBW4IVTA9CH8SF8MB40OINNSN3VRRK6TYZFU2YC92W3CA79CRR9GCDFOPA2Z7XI7ZI9VJEGW94B1F5VHZCXI6IQFMPLTAI49CEP1GIWUVIUO4AVMBI3SV7S4E42II245ZYS90GAODJG2FJUQOUJMVM35LS57S3JZEVRW4AQDUDT0N5FSEF9R8T5TIUVPQRN643LF97K357V8J6IYU8V5ZMBLGTJMK0U5DUH3UR8M7JXDZWWY89EESWYGLCR61QG0H4HDEDL30MQJYBZY91G0D01EWDVMK62NSB0V9M7JL5DMYDJ4SDL27MR50AFSJ2DIBYD7K9RFZ9BN6CVNHESK7827ANZVY4NX30W03F0YV4OW9U7XTH',[381, 336, 54 on node1 (cluster.py:3602, query) 2026-04-30 17:27:43 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'36AAU0ADNZPZGE6NMVEJG0UKO2VRTFZFRPE9ZG2JE97TFKN3NG1W1XGC6B2PQMTV5A7N70EHOEO5R5MUW6FSW6Z0U3PXPOLZFQIDQZ6G7L3FDBVTY1NKU1Y53KAUWE4IMZ69B2JCW97SZSM7N7Z4INM52E8XC1XP0OHTCQ963R8F4WWP39TJBU3WMAVQ6U3XBB20PG358262DPUF560WFN6SJ80RAQIX1LQI2OH509CKNFDJCK4TKS2B9A49AUEXHUNVXFVWE31YJKS4QE2136SDBKLR9YG27ZG6FMDUM4DHG0Y3L6PA64MM0NIEB8ME2O1DE4T6L8JJCNPPDRTUZIQM9DJPACW0Y6LTHFRA94YH9F38HFH0I56AK74OH95C6A3WEQDP8XBKXVL29ISNXUFE4F6SV8F9CTGGBWQXE9HEZV6IEBQ214U61FJXQW85WMPSS0X30CDV61OL8YFSBH9VFTYLE4FAKQ28C44FYAXT18B9V5LNL22HE1XOKONGZQHFWDP37FDQ1VDOGQSMDSE3LU3GYXV7HG1SCMDJ355LWJW73A4QTFPOKZZ1IZ5PQYGVM3EY6QJ0JJ41JMHNQ7YOCMCZN9RSO2AUAL03X6IBEQ0HECWQJ580VEBSYMD5BC117IQKV9L8G0F6UI',[112, 426, 763, 29, 778, 437, 700, 66, 586, 833, 129, 664, 108, 710, 260, 289, 379, 886, 704, 38, 376, 54, 721, 943, 546, 208, 734, 662, 223, 212, 396, 714, 542, 757, 38, 125, 6, 666, 14, 899, 467, 251, 210, 496, 147, 840, 99, 532, 499, 759, 26, 277, 265, 643, 649, 804, 635, 52, 547, 8 on node2 (cluster.py:3602, query) 2026-04-30 17:27:49 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
('2019-10-11',0,'U73I60VV7ST2Z80H8ROK0D021AQPX54PLWVFZL5702GJ4QCHKQ330I3NLE6H3A2ZNEF7ZMA0VXFS0IF4YBM7R3MEBZEZMQI45UE3WTYPMXLTSIQ02G97XNXP4P5CG08TI8WXTN1LE1YOD6XJ3WVJ5UP96X3YTW9E9Z7ELIBU2XFEJOHX7L2R5HXF5AV3ZU6G3E9U7HNJ4XS2W365XMPDU8520DYYR8CNGQH2JBGVRRNRRVO9PXTFGS3GH98012B5L1XVGTMFBEUFSQG27T3NP6JCBWK8700PQHKMTA4OLLFZLV4LO9O3RWLX5EXC8SEU0IG9ZSM451BX5EBGGT82UXQXBCMK0QH5OR77AXUQYJOIDU9WY6JGDNUU3OJUAA8LGGFZF6HBUVJ0THHAYX0THILUTIEFWO5KSX2IUI774JEHUCRFMQEPEFGSL2EM9UEYQA06GD6LBS1GD415U08CFTOM5MR0S3U65Y7NB455IKUS24AF3TYAKBME8MSK8BFQFGKID7TA77Y9YBXT1OE7CHNMM3E1OAVM2VSOAY12A60BEQ5FJ11FHCDJ18MXKCD7UK3CM27K9F03M45BR6G79RQWF4VY5RRDMIX9CGZN773ON9MTB5DR8OZLQOV8PN12BI6OHTGGBUP197AGYEJL015N052QB9M53BY382LQQN4NHLE3HDN8FKMBU9FPO1OT4XKLBM4P72K5P0V4X3OZ5CBL26ZZNZIKQJH8W9C9TO3KV0F8ZF42EV7ZDOXIT3P8UPF',[73, 861, 219, 775, 359, 969, 42, 180, 52, 466, 183, 669, 216, 912, 673, 949, 711, 980, 105, 155, 239, 630, 504, 241, 402, 768, 421, 427, 230, 769, 746, 998, 442, 460, 408, on node1 (cluster.py:3602, query) 2026-04-30 17:27:51 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'YM6ZI2Z5O4WL5KTSMC032H76KPHTDU1FLYQQHNZ5KYGLWDM0Y27CTZ49NNF8CRIUBGSZ8G0I512XBUSUZH0YQGZBCX9IGC3N3LRC5FU8GP9H08B0ASMGGOLY17N8D3COH693Z3CI9Q976U82K32B5YD76PET4V17FYYB7CN6RW16VF0TXU8V1RCZWRKGMKTDCO0SSPOAUGADD735X9DNCE76J3UY8MEERBMMA12Q2G25ANCH7KXH09C44V51UXHK0Z5KVSTZIT21XPINFKM7K3213VSWDR4VRWETHEIGGBINCSGUN4GXELA25NFFFFQY7A1XC2XO4ZDWGSWHLH14BS2XFSPO5JUZK7EPJK0AC1FNH9DBMQV3IW4UY6VZ3072QBV6P8W23GPWQDUJYTLYODV6N55V7Z8NUSHCNF85D1UHSKYPR1B1HETNSBBSE8YAATS5N45EQOAY0LCOIC75SUIV8KECAGIX9OHZ5YBRZ8JUCKFUOH4RP3QESCXKWXOXZW7XS1SV821ZUOWB1XDX9FSR6QOSEQ3XP2GZW15WH0USH7Q9F5I6GVN8D0DKN5WSNB3F3EOIDKKMV9KF9ABAHJKHCP0AYCZI3BNPOKWOAXIYOXHN47CD4SBPKKK9X3IK4LTIPXBO166SN5A3DC1SQ2X6FILR8SVBTP4S43BK11DSB9UADD8G59YL6SN1SQ3U2OA2CA581PGKSDVGGMYILF3IDN1QFVM7V5GN1JT75R8VPT81RMYKNYUO2YSJV4MRL1W5NLS4X7XFMRQMBCHZJO606ELCG33FLS078FRKNBKV8MQVENLZ3740SSCP525WPXJQI1JTU2LGDNCUX6Q2I6TM52K29NOND5W9B4H3V6DS8Q1P03KOXWDS8G4QCK0WCVSKTUOUWXDXMYO4Y9BHFTEI4J3L48Z94RYS69U7HSH1OHQ2NT on node2 (cluster.py:3602, query) 2026-04-30 17:27:55 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11',0,'AD3JC7SR2PF2MQ4PO7XMQ28MEWNUEKNY5SRCKGL1NKC3USYJVVLGP84WC20UW4WDPJECTZP22DPV32DKDGM39OSBE60T5HZ7XL2EGI3OCBEXCJJ7KWYBO5R2ECOCXE5VHIA8BB6S50HBVA96BLPE6YJ18CWCKJLHC0SZTR3S8XXKO8IMQK4F11REPJT8LX145WT4UE03KROXT6PW9SDQ99MZK2MI8TGGBQI5IQ0WEBOS23H85I8YNDRM8LWL49ZYQ0RU2KA0CDGWD06784YE4G3UBTUARKLABT3JOODXJZFYY2D4P8TELSYAUGXTPDGR6OKYEHVG35IZMIYTL5V4Z4WKPBH6MERHPJEDSRR7YWR8M1APJVGS5DP9WHULOQ4OHRC8Q7F3FMOYV3D8GQX7RV28W2JQBNGUM8OMLP0BMJ4DOR7GHK8GTGZ0NPVFXBJ6LA25W3FAJXQL5QDVYS1BP75SEP875O7GXRFQ9U5O0IAS3ABXDIAZVY86OP3C1GMG4FWATGTF4XNTMOAWIJRF5G09Z927NUP9VB4XCER3O0OAOEJD1VW4LI5Y55DZCS636PME9J1ZC1SNO9LY07E61MLEHL5PQ0XNAEOEQB3P4DHTGSX5XR9D4ND5WJ8G7WSKZH3YD1B9BX05DXE1CTQ0O8',[136, 890, 860, 82, 756, 13, 879, 121, 965, 46, 264, 310, 829, 123, 943, 137, 883, 648, 926, 129, 591, 565, 956, 180, 680, 281, 732, 95, 356, 876, 2, 997, 301, 430, 478, 773, 799, 402, 938, 343, 623, 530, 465, 541, 216, 681, 53, 831, 835, 67, 493, 310, 93, 936, 751, 932, 485, 394, on node1 (cluster.py:3602, query) 2026-04-30 17:27:59 [ 416 ] DEBUG : Executing query INSERT INTO polymorphic_table VALUES 
[~50 DEBUG log entries elided, 2026-04-30 17:28:03 – 17:33:39: alternating "Executing query INSERT INTO polymorphic_table VALUES ('2019-10-11', <0|1>, '<random String>', [<random UInt64 array>]) on node1/node2 (cluster.py:3602, query)"; the VALUES payloads are long random rows, each truncated mid-entry in the captured log.]
2026-04-30 17:33:49 [ 416 ] DEBUG : Executing query SYSTEM SYNC REPLICA polymorphic_table on node1 (cluster.py:3602, query)
2026-04-30 17:33:50 [ 416 ] DEBUG : Executing query SYSTEM SYNC REPLICA polymorphic_table on node2 (cluster.py:3602, query)
2026-04-30 17:33:53 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node1 (cluster.py:3602, query)
2026-04-30 17:33:56 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node2 (cluster.py:3602, query)
2026-04-30 17:34:04 [ 416 ] DEBUG : Executing query OPTIMIZE TABLE polymorphic_table FINAL on node1 (cluster.py:3602, query)
2026-04-30 17:34:39 [ 416 ] DEBUG : Executing query SYSTEM SYNC REPLICA polymorphic_table on node2 (cluster.py:3602, query)
2026-04-30 17:34:42 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node1 (cluster.py:3602, query)
2026-04-30 17:34:45 [ 416 ] DEBUG : Executing query SELECT count() FROM polymorphic_table on node2 (cluster.py:3602, query)
2026-04-30 17:34:47 [ 416 ] DEBUG : Executing query SELECT DISTINCT part_type FROM system.parts WHERE table = 'polymorphic_table' AND active on node1 (cluster.py:3602, query)
2026-04-30 17:34:52 [ 416 ] DEBUG : Executing query SELECT DISTINCT part_type FROM system.parts WHERE table = 'polymorphic_table' AND active on node2 (cluster.py:3602, query)
2026-04-30 17:35:00 [ 416 ] DEBUG : Executing query ALTER TABLE polymorphic_table ADD COLUMN ss String on node1 (cluster.py:3602, query)
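The entries above are the load-and-verify phase of the test_polymorphic_parts cases listed in the run plan: random rows are inserted alternately on node1 and node2, both replicas are synced and their row counts compared, and after OPTIMIZE TABLE ... FINAL the surviving part format is read back from system.parts. A minimal sketch of that verification phase follows, assuming node1/node2 are ClickHouseInstance objects from helpers/cluster.py; the helper name and the expected part type are illustrative, not the test's actual source.

# Illustrative sketch; check_parts_after_merge and the "Wide" expectation
# are assumptions, not code from test_polymorphic_parts/test.py.
def check_parts_after_merge(node1, node2, table="polymorphic_table"):
    # Let both replicas catch up before comparing row counts.
    for node in (node1, node2):
        node.query(f"SYSTEM SYNC REPLICA {table}")
    assert node1.query(f"SELECT count() FROM {table}") == node2.query(
        f"SELECT count() FROM {table}"
    )

    # Merge all active parts into one; whether the result is Compact or Wide
    # is governed by min_bytes_for_wide_part / min_rows_for_wide_part.
    node1.query(f"OPTIMIZE TABLE {table} FINAL")
    node2.query(f"SYSTEM SYNC REPLICA {table}")

    for node in (node1, node2):
        part_types = node.query(
            "SELECT DISTINCT part_type FROM system.parts "
            f"WHERE table = '{table}' AND active"
        )
        assert part_types == "Wide\n"  # assumed expectation for this run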
________________________ test_rename_parallel_same_node ________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_rename_parallel_same_node(started_cluster):
        table_name = "test_rename_parallel_same_node"
        drop_table(nodes, table_name)
        try:
>           create_table(nodes, table_name)

test_rename_column/test.py:298: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_rename_column/test.py:91: in create_table
    node.query(
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 231, stderr: Received exception from server (version 24.8.14):
E       Code: 999. DB::Exception: Received from 172.16.8.6:9000. Coordination::Exception. Coordination::Exception: Coordination error: Operation timeout, path /clickhouse/tables/test/test_rename_parallel_same_node. Stack trace:
E       
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E       1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E       2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E       3. ./src/Common/LoggingFormatStringHelpers.h:45: Coordination::Exception::Exception(Coordination::Error, FormatStringHelperImpl::type, std::type_identity::type>, char const*&&, String const&) @ 0x00000000264d5ab9
E       4. ./src/Common/ZooKeeper/IKeeper.h:501: Coordination::Exception::fromPath(Coordination::Error, String const&) @ 0x00000000264d3ea3
E       5. ./build_docker/./src/Common/ZooKeeper/ZooKeeper.cpp:0: zkutil::ZooKeeper::createAncestors(String const&) @ 0x000000002e65c63d
E       6. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::createTableIfNotExists(std::shared_ptr const&) @ 0x000000002ba809d4
E       7. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(String const&, String const&, DB::LoadingStrictnessLevel, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>, DB::RenamingRestrictions, bool) @ 0x000000002ba7b20b
E       8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:1460: std::shared_ptr std::allocate_shared[abi:v15007], String&, String&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, DB::RenamingRestrictions&, bool&, void>(std::allocator const&, String&, String&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&, DB::RenamingRestrictions&, bool&) @ 0x000000002cc14a9b
E       9. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:962: DB::create(DB::StorageFactory::Arguments const&) @ 0x000000002cc0b45f
E       10. ./build_docker/./src/Storages/StorageFactory.cpp:225: DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel) const @ 0x000000002b7de4d3
E       11. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:1718: DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x0000000029214594
E       12. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:0: DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000029207f5c
E       13. ./build_docker/./src/Interpreters/InterpreterCreateQuery.cpp:2045: DB::InterpreterCreateQuery::execute() @ 0x000000002921e755
E       14. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000029b9507c
E       15. ./build_docker/./src/Interpreters/executeQuery.cpp:1397: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000029b8e405
E       16. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000002d1e4cec
E       17. ./build_docker/./src/Server/TCPHandler.cpp:2527: DB::TCPHandler::run() @ 0x000000002d218c00
E       18. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x00000000345a29ef
E       19. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x00000000345a35d7
E       20. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x00000000344a5ceb
E       21. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003449fe48
E       22. asan_thread_start(void*) @ 0x000000000aa49059
E       23. ? @ 0x00007ffa1cc6cac3
E       24. ? @ 0x00007ffa1ccfe850
E       . (KEEPER_EXCEPTION)
E       
E       (query: CREATE TABLE test_rename_parallel_same_node
E       (
E           num UInt32,
E           num2 UInt32 DEFAULT num + 1
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/test_rename_parallel_same_node', 'node1')
E       ORDER BY num PARTITION BY num % 100
E       )

helpers/client.py:239: QueryRuntimeException
------------------------------ Captured log call -------------------------------
2026-04-30 17:34:39 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:34:41 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node2 (cluster.py:3602, query)
2026-04-30 17:34:42 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node3 (cluster.py:3602, query)
2026-04-30 17:34:47 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node4 (cluster.py:3602, query)
2026-04-30 17:34:50 [ 410 ] DEBUG : Executing query CREATE TABLE test_rename_parallel_same_node ( num UInt32, num2 UInt32 DEFAULT num + 1 ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/test_rename_parallel_same_node', 'node1') ORDER BY num PARTITION BY num % 100 on node1 (cluster.py:3602, query)
2026-04-30 17:35:00 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node1 (cluster.py:3602, query)
2026-04-30 17:35:03 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node2 (cluster.py:3602, query)
2026-04-30 17:35:06 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node3 (cluster.py:3602, query)
2026-04-30 17:35:08 [ 410 ] DEBUG : Executing query DROP TABLE IF EXISTS test_rename_parallel_same_node SYNC on node4 (cluster.py:3602, query)
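The CREATE TABLE text embedded in the exception above is the query sent by create_table() in test_rename_column/test.py; the failure is a Keeper "Operation timeout" raised from zkutil::ZooKeeper::createAncestors while the table's ZooKeeper path was being registered, so the table was never created on node1. Below is a hedged reconstruction of the drop/create helpers, inferred only from the query text and the captured log above — argument names, the nodes list, and the per-node replica id are assumptions, not the file's actual contents.

# Reconstruction inferred from the exception and the captured log; the real
# helpers in test_rename_column/test.py may differ (e.g. replica naming).
def drop_table(nodes, table_name):
    for node in nodes:
        node.query(f"DROP TABLE IF EXISTS {table_name} SYNC")

def create_table(nodes, table_name):
    for node in nodes:
        node.query(
            f"CREATE TABLE {table_name} "
            f"( num UInt32, num2 UInt32 DEFAULT num + 1 ) "
            f"ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/{table_name}', '{node.name}') "
            f"ORDER BY num PARTITION BY num % 100"
        )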
_____________________________ test_cluster_groups ______________________________
[gw7] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_cluster_groups(started_cluster):
        for node in all_nodes:
            node.query(
                f"CREATE DATABASE cluster_groups ENGINE = Replicated('/test/cluster_groups', '{node.macros['shard']}', '{node.macros['replica']}');"
            )

        # 1. system.clusters
        cluster_query = "SELECT host_name from system.clusters WHERE cluster = 'cluster_groups' ORDER BY host_name"
        expected_main = "main_node_1\nmain_node_2\n"
        expected_backup = "backup_node_1\nbackup_node_2\n"
        for node in [main_node_1, main_node_2]:
            assert_eq_with_retry(node, cluster_query, expected_main)
        for node in [backup_node_1, backup_node_2]:
            assert_eq_with_retry(node, cluster_query, expected_backup)

        # 2. Query execution depends only on your cluster group
        backup_node_1.stop_clickhouse()
        backup_node_2.stop_clickhouse()

        # OK
        main_node_1.query(
            "CREATE TABLE cluster_groups.table_1 (d Date, k UInt64) ENGINE=ReplicatedMergeTree ORDER BY k PARTITION BY toYYYYMM(d);"
        )

        # Exception
        main_node_2.stop_clickhouse()
        settings = {"distributed_ddl_task_timeout": 5}
        assert "is not finished on 1 of 2 hosts" in main_node_1.query_and_get_error(
            "CREATE TABLE cluster_groups.table_2 (d Date, k UInt64) ENGINE=ReplicatedMergeTree ORDER BY k PARTITION BY toYYYYMM(d);",
            settings=settings,
        )

        # 3. After start both groups are synced
        backup_node_1.start_clickhouse()
        backup_node_2.start_clickhouse()
>       main_node_2.start_clickhouse()

test_replicated_database_cluster_groups/test.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 
start_wait_sec = 60, retry_start = True, expected_to_fail = False

    def start_clickhouse(
        self, start_wait_sec=60, retry_start=True, expected_to_fail=False
    ):
        if not self.stay_alive:
            raise Exception(
                "ClickHouse can be started again only with stay_alive=True instance"
            )
        start_time = time.time()
        time_to_sleep = 0.5

        while start_time + start_wait_sec >= time.time():
            # sometimes after SIGKILL (hard reset) server may refuse to start for some time
            # for different reasons.
            pid = self.get_process_pid("clickhouse")
            if pid is None:
                logging.debug("No clickhouse process running. Start new one.")
                self.exec_in_container(
                    ["bash", "-c", "{} --daemon".format(self.clickhouse_start_command)],
                    user=str(os.getuid()),
                )
                if expected_to_fail:
                    self.wait_start_failed(start_wait_sec + start_time - time.time())
                    return
                time.sleep(1)
                continue
            else:
                logging.debug("Clickhouse process running.")
                if expected_to_fail:
                    raise Exception("ClickHouse was expected not to be running.")
                try:
                    self.wait_start(start_wait_sec + start_time - time.time())
                    return
                except Exception as e:
                    logging.warning(
                        f"Current start attempt failed. Will kill {pid} just in case."
                    )
                    self.exec_in_container(
                        ["bash", "-c", f"kill -9 {pid}"], user="root", nothrow=True
                    )
                    if not retry_start:
                        raise
                    time.sleep(time_to_sleep)

>       raise Exception("Cannot start ClickHouse, see additional info in logs")
E       Exception: Cannot start ClickHouse, see additional info in logs

helpers/cluster.py:3992: Exception
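The start_clickhouse helper shown above encodes a deliberate retry discipline: after a hard reset (SIGKILL) the server may refuse to start for a while, so the loop polls for a pid, re-launches the daemon when none is found, and kills a half-started server before retrying, all bounded by a wall-clock deadline. A condensed sketch of the same pattern; get_pid, launch, wait_ready, and kill_hard are hypothetical stand-ins, not the actual cluster.py helpers:

    import time

    def start_with_retries(get_pid, launch, wait_ready, kill_hard, deadline_sec=60.0):
        # Poll-launch-verify loop bounded by a wall-clock deadline, mirroring
        # the helper above; the callables are hypothetical stand-ins.
        deadline = time.time() + deadline_sec
        while time.time() < deadline:
            pid = get_pid()
            if pid is None:
                launch()        # daemonize the server; may silently fail right after SIGKILL
                time.sleep(1)
                continue
            try:
                wait_ready(timeout=deadline - time.time())
                return pid      # server answered within the remaining budget
            except Exception:
                kill_hard(pid)  # kill -9 the half-started server, then retry
                time.sleep(0.5)
        raise Exception("Cannot start ClickHouse, see additional info in logs")

In the failure above the loop exhausted its 60-second budget on main_node_2, so the final raise is what pytest reports.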
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2026-04-30 17:27:06 [ 458 ] INFO : Running tests in /ClickHouse/tests/integration/test_replicated_database_cluster_groups/test.py (cluster.py:2788, start)
2026-04-30 17:27:06 [ 458 ] DEBUG : Cluster start called. is_up=False (cluster.py:2795, start)
2026-04-30 17:27:06 [ 458 ] DEBUG : Docker networks for project roottestreplicateddatabaseclustergroups_gw7 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:27:07 [ 458 ] DEBUG : Docker containers for project roottestreplicateddatabaseclustergroups_gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:27:07 [ 458 ] DEBUG : Docker volumes for project roottestreplicateddatabaseclustergroups_gw7 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:27:07 [ 458 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2026-04-30 17:27:07 [ 458 ] DEBUG : Docker networks for project roottestreplicateddatabaseclustergroups_gw7 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2026-04-30 17:27:08 [ 458 ] DEBUG : Docker containers for project roottestreplicateddatabaseclustergroups_gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2026-04-30 17:27:08 [ 458 ] DEBUG : Docker volumes for project roottestreplicateddatabaseclustergroups_gw7 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2026-04-30 17:27:08 [ 458 ] DEBUG : Command:docker container list --all --filter name='^/roottestreplicateddatabaseclustergroups_gw7_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2026-04-30 17:27:08 [ 458 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2026-04-30 17:27:08 [ 458 ] DEBUG : No running containers for project: roottestreplicateddatabaseclustergroups_gw7 (cluster.py:904, cleanup)
2026-04-30 17:27:08 [ 458 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2026-04-30 17:27:08 [ 458 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
2026-04-30 17:27:08 [ 458 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2026-04-30 17:27:09 [ 458 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2026-04-30 17:27:09 [ 458 ] DEBUG : Images pruned (cluster.py:929, cleanup)
2026-04-30 17:27:09 [ 458 ] DEBUG : Trying to prune unused volumes... (cluster.py:935, cleanup)
2026-04-30 17:27:09 [ 458 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:27:09 [ 458 ] DEBUG : Stdout:4 (cluster.py:121, run_and_check)
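Before bringing the cluster up, the harness sweeps leftovers from earlier runs: it lists containers matching the project name, prunes unused images, and counts volumes. A minimal sketch of that sweep using the exact commands logged above (the function itself is a simplified stand-in, not the cluster.py code):

    import subprocess

    def docker_cleanup(project="roottestreplicateddatabaseclustergroups_gw7"):
        # List leftover containers from a previous run of the same project.
        out = subprocess.check_output(
            ["docker", "container", "list", "--all",
             "--filter", f"name=^/{project}_.*_1$",
             "--format", "{{.ID}}:{{.Names}}"],
            text=True,
        )
        unstopped = dict(line.split(":", 1) for line in out.splitlines() if line)
        # The log shows 'Unstopped containers: {}', so there was nothing to stop here.
        subprocess.run(["docker", "image", "prune", "-f"], check=True)  # 'Images pruned'
        volumes = subprocess.check_output("docker volume ls | wc -l", shell=True, text=True)
        return unstopped, int(volumes)  # the run above reported Stdout:4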
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup directory for instance: main_node_1 (cluster.py:2808, start)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy custom test config files [] to /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/database (cluster.py:4649, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/logs (cluster.py:4660, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4746, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup directory for instance: main_node_2 (cluster.py:2808, start)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy custom test config files [] to /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/database (cluster.py:4649, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs (cluster.py:4660, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4746, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup directory for instance: backup_node_1 (cluster.py:2808, start)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_replicated_database_cluster_groups/configs/backup_group.xml'] to /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/database (cluster.py:4649, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/logs (cluster.py:4660, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4746, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup directory for instance: backup_node_2 (cluster.py:2808, start)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4534, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Create directory for common tests configuration (cluster.py:4539, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy common configuration from helpers (cluster.py:4559, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Generate and write macros file (cluster.py:4602, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_replicated_database_cluster_groups/configs/backup_group.xml'] to /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/configs/config.d (cluster.py:4632, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/database (cluster.py:4649, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs (cluster.py:4660, create_dir)
2026-04-30 17:27:09 [ 458 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4746, create_dir)
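Each instance's entrypoint keeps the container alive after the server daemonizes: the trap forwards INT/TERM by killing the tail, the server starts with --daemon, and a coproc'd 'tail -f /dev/null' is waited on so PID 1 never exits; the doubled '$$' is compose-file escaping for '$', so the shell ultimately runs 'wait $!'. A sketch of launching the same entrypoint directly (the docker invocation and container name are illustrative, not the harness's actual compose call):

    import subprocess

    # Same entrypoint as logged above, with '$!' unescaped since this bypasses
    # the compose file; the image tag is taken from this log, the name is arbitrary.
    ENTRYPOINT = (
        "trap 'pkill tail' INT TERM; "
        "clickhouse server --config-file=/etc/clickhouse-server/config.xml "
        "--log-file=/var/log/clickhouse-server/clickhouse-server.log "
        "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; "
        "coproc tail -f /dev/null; wait $!"
    )

    def run_instance(name="main_node_1_dbg", image="altinityinfra/integration-test:1e0b53d756cf"):
        subprocess.run(
            ["docker", "run", "-d", "--name", name, image, "bash", "-c", ENTRYPOINT],
            check=True,
        )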
2026-04-30 17:27:09 [ 458 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:1e0b53d756cf', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env (cluster.py:86, _create_env_file)
2026-04-30 17:27:09 [ 458 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2026-04-30 17:27:09 [ 458 ] DEBUG : No config file found (config.py:28, find_config_file)
2026-04-30 17:27:09 [ 458 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2026-04-30 17:27:09 [ 458 ] DEBUG : No config file found (config.py:28, find_config_file)
2026-04-30 17:27:09 [ 458 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 824 (connectionpool.py:547, _make_request)
2026-04-30 17:27:09 [ 458 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', '--project-name', 'roottestreplicateddatabaseclustergroups_gw7', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/docker-compose.yml', 'pull'] (cluster.py:113, run_and_check)
2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo1 ... (cluster.py:123, run_and_check)
2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo2 ... (cluster.py:123, run_and_check)
2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo3 ... (cluster.py:123, run_and_check)
2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_1 ... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_2 ... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_2 ... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo2 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_1 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_2 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo2 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo2 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_2 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_2 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_2 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling main_node_2 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_1 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling backup_node_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo1 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo3 ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo1 ... digest: sha256:bf725030a292d5daab... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo1 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo3 ... digest: sha256:bf725030a292d5daab... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo3 ... status: image is up to date for a... (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Stderr:Pulling zoo3 ... done (cluster.py:123, run_and_check) 2026-04-30 17:28:37 [ 458 ] DEBUG : Setup ZooKeeper (cluster.py:2849, start) 2026-04-30 17:28:37 [ 458 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/log', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/config', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/log', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/config', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/coordination', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/log', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/config', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/coordination'] (cluster.py:2850, start) 2026-04-30 17:28:37 [ 458 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', '--project-name', 'roottestreplicateddatabaseclustergroups_gw7', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.25', 'Details': {'GitCommit': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}}, {'Name': 'runc', 'Version': '1.2.4', 'Details': {'GitCommit': 'v1.2.4-0-g6c52b3f'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, 
GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestreplicateddatabaseclustergroupsgw7_default') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'ID': 'bcc810d6b9066471b0b6fa75f557a15a1cbf31bb'}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Containers': 42, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestreplicateddatabaseclustergroups_gw7_default') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestreplicateddatabaseclustergroups_gw7_default" with the default driver (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestreplicateddatabaseclustergroups_gw7_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': 'ca777dbaa9f1abfd55981961c04f597e073b40407423e433a0aaf439d5608255', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 
'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker 
containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {, , } (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo2', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo2', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo2', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo3', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 59c7743b3dd07371b5abbee129e5befe7b330c6fc8e5dbd50a6a5faf056b0963 (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicateddatabaseclustergroups_gw7_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/coordination', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Target': 
'/var/lib/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicateddatabaseclustergroups_gw7_zoo2_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service': 'zoo2', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '59c7743b3dd07371b5abbee129e5befe7b330c6fc8e5dbd50a6a5faf056b0963'}, host_config={'NetworkMode': 'roottestreplicateddatabaseclustergroups_gw7_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper2/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicateddatabaseclustergroups_gw7_default': {'Aliases': ['zoo2'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo3', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo3', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestreplicateddatabaseclustergroupsgw7', 'com.docker.compose.service=zoo1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo1', number=1)} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo1', number=1) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('altinityinfra/integration-test:1e0b53d756cf') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '8b6a8ee6e6ef3a0089bedf6ec48f6f51c2bfe6d5270aa44d54812aa4aea10025', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('8b6a8ee6e6ef3a0089bedf6ec48f6f51c2bfe6d5270aa44d54812aa4aea10025') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config2.xml', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('8b6a8ee6e6ef3a0089bedf6ec48f6f51c2bfe6d5270aa44d54812aa4aea10025', 'roottestreplicateddatabaseclustergroups_gw7_default') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 468080755d5754f8824c25f536b96a39d35c2f1139fb43a5a80ea8383a8a83f6 (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicateddatabaseclustergroups_gw7_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/coordination', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Target': 
'/var/lib/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicateddatabaseclustergroups_gw7_zoo3_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service': 'zoo3', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '468080755d5754f8824c25f536b96a39d35c2f1139fb43a5a80ea8383a8a83f6'}, host_config={'NetworkMode': 'roottestreplicateddatabaseclustergroups_gw7_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper3/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicateddatabaseclustergroups_gw7_default': {'Aliases': ['zoo3'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'ArgsEscaped': True, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] 
DEBUG : Stderr: 'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': ['/bin/sh', '-c', 'sleep 1'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Entrypoint': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) [a further docker inspect_image <- / -> exchange for 'altinityinfra/integration-test:1e0b53d756cf', identical to the image metadata dump above, elided] 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:...
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 8a96067ee83ece413b5c1e3ab87e6fc0237c42f924d9743c05bc1b2b59800bfe (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestreplicateddatabaseclustergroups_gw7_default', devices=None, device_requests=None, dns=None, dns_opt=['attempts:2', 'timeout:1', 'inet6', 'rotate'], dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=['label:disable'], ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Source': '/clickhouse', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Target': '/usr/bin/clickhouse-keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 
'Type': 'bind'}, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: {'ReadOnly': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (entrypoint='clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log', image='altinityinfra/integration-test:1e0b53d756cf', user='0', volumes={}, name='roottestreplicateddatabaseclustergroups_gw7_zoo1_1', detach=True, environment=[], labels={'com.docker.compose.project': 'roottestreplicateddatabaseclustergroups_gw7', 'com.docker.compose.service': 'zoo1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_keeper.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '8a96067ee83ece413b5c1e3ab87e6fc0237c42f924d9743c05bc1b2b59800bfe'}, host_config={'NetworkMode': 'roottestreplicateddatabaseclustergroups_gw7_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'CapAdd': ['SYS_PTRACE', 'NET_ADMIN', 'IPC_LOCK', 'SYS_NICE'], 'DnsOptions': ['attempts:2', 'timeout:1', 'inet6', 'rotate'], 'SecurityOpt': ['label:disable'], 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/usr/bin/clickhouse-keeper', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/usr/bin/clickhouse', 'Source': '/clickhouse', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/log/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/log', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/etc/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/config', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse-keeper', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}, {'Target': '/var/lib/clickhouse', 'Source': '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/keeper1/coordination', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestreplicateddatabaseclustergroups_gw7_default': {'Aliases': ['zoo1'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: 
docker connect_container_to_network <- ('8b6a8ee6e6ef3a0089bedf6ec48f6f51c2bfe6d5270aa44d54812aa4aea10025', 'roottestreplicateddatabaseclustergroups_gw7_default', aliases=['8b6a8ee6e6ef', 'zoo2'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('8b6a8ee6e6ef3a0089bedf6ec48f6f51c2bfe6d5270aa44d54812aa4aea10025') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'a3aaad7e9aec37dc5a46a84f7fbd31dcc2103b58096974a0b773fd8fedd1deec', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('a3aaad7e9aec37dc5a46a84f7fbd31dcc2103b58096974a0b773fd8fedd1deec') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config1.xml', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('a3aaad7e9aec37dc5a46a84f7fbd31dcc2103b58096974a0b773fd8fedd1deec', 'roottestreplicateddatabaseclustergroups_gw7_default') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'bb64bfee05dcb66df484111e3bf6eb2277e2c06d1ae853288f423202c9e2ca3d', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('bb64bfee05dcb66df484111e3bf6eb2277e2c06d1ae853288f423202c9e2ca3d') (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('a3aaad7e9aec37dc5a46a84f7fbd31dcc2103b58096974a0b773fd8fedd1deec', 'roottestreplicateddatabaseclustergroups_gw7_default', aliases=['zoo1', 'a3aaad7e9aec'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Args': ['keeper', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--config=/etc/clickhouse-keeper/keeper_config3.xml', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: '--errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log'], (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Cmd': None, (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) [note: throughout this creation phase the docker-compose debug stream interleaves the events below with several hundred identical "Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)" heartbeat lines, all at 17:28:45; those runs are elided]
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('bb64bfee05dcb66df484111e3bf6eb2277e2c06d1ae853288f423202c9e2ca3d', 'roottestreplicateddatabaseclustergroups_gw7_default') (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('a3aaad7e9aec37dc5a46a84f7fbd31dcc2103b58096974a0b773fd8fedd1deec') (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('bb64bfee05dcb66df484111e3bf6eb2277e2c06d1ae853288f423202c9e2ca3d', 'roottestreplicateddatabaseclustergroups_gw7_default', aliases=['zoo3', 'bb64bfee05dc'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('bb64bfee05dcb66df484111e3bf6eb2277e2c06d1ae853288f423202c9e2ca3d') (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo2', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo3', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestreplicateddatabaseclustergroups_gw7', service='zoo1', number=1) (cluster.py:123, run_and_check)
2026-04-30 17:28:45 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... done (cluster.py:123, run_and_check)
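For orientation: the compose.cli.verbose_proxy lines above are docker-compose v1 tracing its own calls into docker-py's low-level APIClient. Each keeper container is created, detached from the project network, re-attached under its service alias (zoo1/zoo2/zoo3), and started. A minimal sketch of that sequence with docker-py — illustrative only: the image tag, network name, and entrypoint are copied from the log, while the container name is hypothetical, the mount list is trimmed, and error handling is omitted:

    import docker

    client = docker.APIClient()  # low-level API, the same one compose v1 proxies

    IMAGE = "altinityinfra/integration-test:1e0b53d756cf"
    NETWORK = "roottestreplicateddatabaseclustergroups_gw7_default"

    host_config = client.create_host_config(
        network_mode=NETWORK,
        restart_policy={"Name": "always", "MaximumRetryCount": 0},
        cap_add=["SYS_PTRACE", "NET_ADMIN", "IPC_LOCK", "SYS_NICE"],
        dns_opt=["attempts:2", "timeout:1", "inet6", "rotate"],
        security_opt=["label:disable"],
    )
    container = client.create_container(
        image=IMAGE,
        entrypoint="clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml",
        name="example_zoo1_1",  # hypothetical; the harness derives names from the project
        detach=True,
        host_config=host_config,
    )
    # compose drops the auto-attached endpoint and re-attaches it with the
    # service alias, which is why the log shows a disconnect/connect pair
    # around every docker start.
    client.disconnect_container_from_network(container, NETWORK)
    client.connect_container_to_network(container, NETWORK, aliases=["zoo1"])
    client.start(container)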
2026-04-30 17:28:45 [ 458 ] DEBUG : Wait ZooKeeper to start (cluster.py:2504, wait_zookeeper_to_start)
2026-04-30 17:28:45 [ 458 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2135, get_instance_ip)
2026-04-30 17:28:45 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicateddatabaseclustergroups_gw7_zoo1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:28:46 [ 458 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.23.3, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client)
2026-04-30 17:28:46 [ 458 ] INFO : Connecting to 172.16.23.3(172.16.23.3):2181, use_ssl: False (connection.py:650, _connect)
2026-04-30 17:28:46 [ 458 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
[this connect / "Connection refused" pair repeats four more times between 17:28:46 and 17:28:47 while the keeper is still coming up; elided]
2026-04-30 17:28:50 [ 458 ] INFO : Connecting to 172.16.23.3(172.16.23.3):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:28:50 [ 458 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2026-04-30 17:28:50 [ 458 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2135, get_instance_ip) 2026-04-30 17:28:50 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicateddatabaseclustergroups_gw7_zoo2_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:28:50 [ 458 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.23.2, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client) 2026-04-30 17:28:50 [ 458 ] INFO : Connecting to 172.16.23.2(172.16.23.2):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:28:50 [ 458 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2026-04-30 17:28:50 [ 458 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2135, get_instance_ip) 2026-04-30 17:28:50 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicateddatabaseclustergroups_gw7_zoo3_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:28:50 [ 458 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.23.4, port:2181, use_ssl:False (cluster.py:3286, get_kazoo_client) 2026-04-30 17:28:50 [ 458 ] INFO : Connecting to 172.16.23.4(172.16.23.4):2181, use_ssl: False (connection.py:650, _connect) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2026-04-30 17:28:50 [ 458 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2026-04-30 17:28:50 [ 458 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2026-04-30 17:28:50 [ 458 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2026-04-30 17:28:51 [ 458 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2026-04-30 17:28:51 [ 458 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2026-04-30 17:28:51 [ 458 ] DEBUG : All instances of ZooKeeper Secure started (cluster.py:2519, wait_zookeeper_nodes_to_start)
2026-04-30 17:28:51 [ 458 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env --project-name roottestreplicateddatabaseclustergroups_gw7 --file /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/docker-compose.yml --file /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/docker-compose.yml --file /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/docker-compose.yml up -d --no-recreate') (cluster.py:3146, start)
[the same invocation is then echoed by run_and_check as an argv list (cluster.py:113); elided as a duplicate]
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... done (cluster.py:123, run_and_check)
2026-04-30 17:29:01 [ 458 ] DEBUG : Stderr:Creating roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... done (cluster.py:123, run_and_check)
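The "Wait ZooKeeper to start" phase above is a plain readiness probe: the harness resolves each keeper container's IP, opens a kazoo session, lists the root znode (hence the GetChildren -> ['keeper'] responses), and closes the session; the subsequent "connection lost" and "Failed connecting to Zookeeper within the connection retry policy" warnings are just the fallout of that deliberate Close(). A rough equivalent — a sketch only, with the IPs taken from the log and the timeout value invented:

    from kazoo.client import KazooClient

    # Keeper endpoints observed above; the harness obtains them per container
    # via get_instance_ip().
    for hosts in ("172.16.23.3:2181", "172.16.23.2:2181", "172.16.23.4:2181"):
        zk = KazooClient(hosts=hosts)
        zk.start(timeout=60)          # keeps retrying while the port still refuses connections
        assert zk.get_children("/") == ["keeper"]  # matches the GetChildren response in the log
        zk.stop()                     # provokes the expected "connection lost" warnings
        zk.close()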
2026-04-30 17:29:01 [ 458 ] DEBUG : ClickHouse instance created (cluster.py:3154, start)
2026-04-30 17:29:01 [ 458 ] DEBUG : get_instance_ip instance_name=main_node_1 (cluster.py:2135, get_instance_ip)
2026-04-30 17:29:01 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestreplicateddatabaseclustergroups_gw7_main_node_1_1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2026-04-30 17:29:01 [ 458 ] DEBUG : Waiting for ClickHouse start in main_node_1, ip: 172.16.23.7... (cluster.py:3161, start)
2026-04-30 17:29:01 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
[the identical container-status poll repeats several times per second from 17:29:01 through 17:29:11; the run is elided and the capture ends mid-request]
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:11 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:11 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:11 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:11 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:11 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:12 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:13 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:14 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:15 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:16 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:17 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:18 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:19 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:20 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:21 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:22 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:23 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:24 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:25 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:26 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:27 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:27 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:27 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:28 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:28 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:28 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:28 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:28 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET 
/v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:29 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : http://localhost:None "GET /v1.42/containers/d882e2d6b157fa24fdc1cdff21c59547fc975fce9c5b1a5e14bbceb25e0926f8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2026-04-30 17:29:30 [ 458 ] DEBUG : ClickHouse main_node_1 
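The long run of container-inspect requests above is the harness polling the Docker API until the server inside the container is reachable. Below is a minimal sketch of that wait loop using the docker SDK plus a plain TCP probe; the helper name, the port, and the timeout are illustrative assumptions, not the actual cluster.py code:

    import socket
    import time

    import docker  # docker SDK for Python (pip install docker)

    def wait_clickhouse_started(container_id, port=9000, timeout=120.0):
        """Hypothetical helper mirroring the GET /containers/<id>/json
        polling above; not the actual cluster.py implementation."""
        client = docker.from_env()
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            # one inspect call per iteration, i.e. one GET line in the log
            container = client.containers.get(container_id)
            if container.status == "running":
                networks = container.attrs["NetworkSettings"]["Networks"]
                ip = next(iter(networks.values()))["IPAddress"]
                try:
                    with socket.create_connection((ip, port), timeout=1):
                        return  # the port accepts connections: server is up
                except OSError:
                    pass  # container is up, server still starting
            time.sleep(0.1)  # sub-second cadence, as in the timestamps above
        raise TimeoutError(f"ClickHouse in {container_id} did not start in {timeout}s")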
2026-04-30 17:29:30 [ 458 ] DEBUG : get_instance_ip instance_name=main_node_2 (cluster.py:2135, get_instance_ip)
2026-04-30 17:29:30 [ 458 ] DEBUG : Waiting for ClickHouse start in main_node_2, ip: 172.16.23.8... (cluster.py:3161, start)
[... two container-inspect requests elided ...]
2026-04-30 17:29:31 [ 458 ] DEBUG : ClickHouse main_node_2 started (cluster.py:3165, start)
2026-04-30 17:29:31 [ 458 ] DEBUG : get_instance_ip instance_name=backup_node_1 (cluster.py:2135, get_instance_ip)
2026-04-30 17:29:31 [ 458 ] DEBUG : Waiting for ClickHouse start in backup_node_1, ip: 172.16.23.6... (cluster.py:3161, start)
[... repeated container-inspect polling of 1e23b265c9a858b80c25f35713479e83d9d1f4360737f46308b471ff0d3e40c4 between 17:29:31 and 17:29:32 elided ...]
2026-04-30 17:29:32 [ 458 ] DEBUG : ClickHouse backup_node_1 started (cluster.py:3165, start)
2026-04-30 17:29:32 [ 458 ] DEBUG : get_instance_ip instance_name=backup_node_2 (cluster.py:2135, get_instance_ip)
2026-04-30 17:29:32 [ 458 ] DEBUG : Waiting for ClickHouse start in backup_node_2, ip: 172.16.23.5... (cluster.py:3161, start)
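Each "GET /v1.42/containers/<id>/json" line maps one-to-one onto a container-inspect call, and get_instance_ip reads the address out of the inspect payload. A sketch with docker-py's low-level client; the socket path is the default assumption:

    import docker

    # Every inspect_container() call emits exactly one
    # "GET /v1.42/containers/<id>/json" request like the lines above.
    api = docker.APIClient(base_url="unix://var/run/docker.sock", version="1.42")

    info = api.inspect_container("roottestreplicateddatabaseclustergroups_gw7_main_node_2_1")
    networks = info["NetworkSettings"]["Networks"]
    print(next(iter(networks.values()))["IPAddress"])  # the log shows 172.16.23.8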
2026-04-30 17:29:32 [ 458 ] DEBUG : ClickHouse backup_node_2 started (cluster.py:3165, start)
------------------------------ Captured log call -------------------------------
2026-04-30 17:29:32 [ 458 ] DEBUG : Executing query CREATE DATABASE cluster_groups ENGINE = Replicated('/test/cluster_groups', '1', '1'); on main_node_1 (cluster.py:3602, query)
2026-04-30 17:29:36 [ 458 ] DEBUG : Executing query CREATE DATABASE cluster_groups ENGINE = Replicated('/test/cluster_groups', '1', '2'); on main_node_2 (cluster.py:3602, query)
2026-04-30 17:29:40 [ 458 ] DEBUG : Executing query CREATE DATABASE cluster_groups ENGINE = Replicated('/test/cluster_groups', '1', '3'); on backup_node_1 (cluster.py:3602, query)
2026-04-30 17:29:43 [ 458 ] DEBUG : Executing query CREATE DATABASE cluster_groups ENGINE = Replicated('/test/cluster_groups', '1', '4'); on backup_node_2 (cluster.py:3602, query)
2026-04-30 17:29:48 [ 458 ] DEBUG : Executing query SELECT host_name from system.clusters WHERE cluster = 'cluster_groups' ORDER BY host_name on main_node_1 (cluster.py:3602, query)
2026-04-30 17:29:51 [ 458 ] DEBUG : Executing query SELECT host_name from system.clusters WHERE cluster = 'cluster_groups' ORDER BY host_name on main_node_2 (cluster.py:3602, query)
2026-04-30 17:29:56 [ 458 ] DEBUG : Executing query SELECT host_name from system.clusters WHERE cluster = 'cluster_groups' ORDER BY host_name on backup_node_1 (cluster.py:3602, query)
2026-04-30 17:30:01 [ 458 ] DEBUG : Executing query SELECT host_name from system.clusters WHERE cluster = 'cluster_groups' ORDER BY host_name on backup_node_2 (cluster.py:3602, query)
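The "Captured log call" section is the test body itself: each of the four nodes creates the same Replicated database with a shared ZooKeeper path and shard name ('1') but a distinct replica name ('1'..'4'), then reads system.clusters to confirm all hosts joined the auto-created cluster. A sketch of that sequence in the style of the integration-test helpers visible in this log; the fixture layout, the add_instance arguments, and the assumption that host_name equals the instance name are reconstructions from memory, not the real test file:

    import pytest
    from helpers.cluster import ClickHouseCluster

    cluster = ClickHouseCluster(__file__)
    node_names = ["main_node_1", "main_node_2", "backup_node_1", "backup_node_2"]
    nodes = [cluster.add_instance(name, with_zookeeper=True) for name in node_names]

    @pytest.fixture(scope="module")
    def started_cluster():
        try:
            cluster.start()
            yield cluster
        finally:
            cluster.shutdown()

    def test_cluster_groups(started_cluster):
        for replica, node in enumerate(nodes, start=1):
            # Same ZooKeeper path and shard '1' everywhere; only the
            # replica name differs, as in the CREATE DATABASE lines above.
            node.query(
                "CREATE DATABASE cluster_groups "
                f"ENGINE = Replicated('/test/cluster_groups', '1', '{replica}');"
            )
        # Every replica should list all four hosts (assumes host_name
        # matches the instance name; the log does not show the results).
        expected = "\n".join(sorted(node_names)) + "\n"
        for node in nodes:
            assert node.query(
                "SELECT host_name FROM system.clusters "
                "WHERE cluster = 'cluster_groups' ORDER BY host_name"
            ) == expected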
2026-04-30 17:30:04 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:04 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:30:06 [ 458 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2026-04-30 17:30:06 [ 458 ] DEBUG : Stdout: 8 ? 00:00:18 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:30:06 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:06 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:30:09 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:09 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:30:11 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
[... the same ps poll on backup_node_1, still printing Stdout:8, repeated roughly every one to two seconds between 17:30:12 and 17:30:34 elided ...]
2026-04-30 17:30:35 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:35 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:30:36 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2026-04-30 17:30:36 [ 458 ] DEBUG : Stdout:799 (cluster.py:121, run_and_check)
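The stop sequence above is a soft kill followed by a poll: send pkill clickhouse, then repeatedly list surviving clickhouse PIDs inside the container until the list is empty or a deadline passes. A sketch of that loop over the docker CLI; the helper name and the timeout are illustrative, not the harness's actual code:

    import subprocess
    import time

    PS_PIDS = (
        "ps ax | grep 'clickhouse' | grep -v 'grep' "
        "| grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"
    )

    def soft_stop_clickhouse(container, timeout=60.0):
        """Hypothetical mirror of the pkill-then-poll pattern in the log."""
        subprocess.run(
            ["docker", "exec", "-u", "root", container, "bash", "-c", "pkill clickhouse"],
            check=False,  # pkill exits non-zero if nothing matched
        )
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["docker", "exec", container, "bash", "-c", PS_PIDS],
                capture_output=True, text=True,
            ).stdout.strip()
            if not out:  # no clickhouse PIDs left: the server is down
                return True
            time.sleep(2)  # the log shows roughly a two-second cadence
        return False  # caller falls back to a force stop (see the WARNING below)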
2026-04-30 17:30:37 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:37 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:30:39 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2026-04-30 17:30:39 [ 458 ] DEBUG : Stdout:799 (cluster.py:121, run_and_check)
[... one final ps poll at 17:30:40 that printed no PIDs elided ...]
2026-04-30 17:30:41 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps aux'] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:41 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', 'ps aux'] (cluster.py:113, run_and_check)
2026-04-30 17:30:42 [ 458 ] DEBUG : Stdout:USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND (cluster.py:121, run_and_check)
2026-04-30 17:30:42 [ 458 ] DEBUG : Stdout:root 1 0.1 0.0 7372 3292 ? Ss 17:28 0:00 bash -c trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $! (cluster.py:121, run_and_check)
2026-04-30 17:30:42 [ 458 ] DEBUG : Stdout:root 11 0.0 0.0 5804 960 ? S 17:29 0:00 tail -f /dev/null (cluster.py:121, run_and_check)
2026-04-30 17:30:42 [ 458 ] DEBUG : Stdout:root 835 0.0 0.0 10072 1552 ? Rs 17:30 0:00 ps aux (cluster.py:121, run_and_check)
2026-04-30 17:30:42 [ 458 ] WARNING : We want force stop clickhouse, but no clickhouse-server is running USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.1 0.0 7372 3292 ? Ss 17:28 0:00 bash -c trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $! root 11 0.0 0.0 5804 960 ? S 17:29 0:00 tail -f /dev/null root 835 0.0 0.0 10072 1552 ? Rs 17:30 0:00 ps aux (cluster.py:3942, stop_clickhouse)
2026-04-30 17:30:42 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:42 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:30:43 [ 458 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2026-04-30 17:30:43 [ 458 ] DEBUG : Stdout: 8 ? 00:00:29 clickhouse (cluster.py:121, run_and_check)
2026-04-30 17:30:43 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:43 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2026-04-30 17:30:45 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container)
2026-04-30 17:30:45 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2026-04-30 17:30:47 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
[... the same ps poll on backup_node_2, still printing Stdout:8, repeated roughly every two seconds between 17:30:48 and 17:31:05 elided ...]
2026-04-30 17:31:06 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False
nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:06 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:07 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:08 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:08 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:12 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:13 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:13 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:15 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:16 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:16 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:17 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:17 [ 458 ] WARNING : Force kill clickhouse in stop_clickhouse. 
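At this point the graceful-stop budget is spent, so the harness escalates: it records the surviving PID, tries to capture a full gdb backtrace into the instance's logs directory, and only then kills the process. In the records that follow, the gdb step fails with "No such file or directory", apparently because the bash redirect runs inside the node container, where the runner-side /ClickHouse/.../logs path is not available. A hedged sketch of a more defensive variant that creates the target directory first (function name and paths are illustrative):

```python
import subprocess

def dump_threads(container: str, pid: int, out_path: str):
    """Illustrative: batch gdb backtrace of a PID inside a container.

    mkdir -p on the parent directory avoids the redirect failure seen
    in the log when the logs directory is absent in the container."""
    cmd = (
        f'mkdir -p "$(dirname {out_path})" && '
        f"gdb -batch -ex 'thread apply all bt full' -p {pid} > {out_path}"
    )
    return subprocess.run(
        ["docker", "exec", "-u", "root", container, "bash", "-c", cmd],
        capture_output=True, text=True,
    )
```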
ps:8 (cluster.py:3926, stop_clickhouse) 2026-04-30 17:31:17 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stdout.log"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:17 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stdout.log"] (cluster.py:113, run_and_check) 2026-04-30 17:31:18 [ 458 ] DEBUG : Stderr:bash: line 1: /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stdout.log: No such file or directory (cluster.py:123, run_and_check) 2026-04-30 17:31:18 [ 458 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check) 2026-04-30 17:31:18 [ 458 ] WARNING : Stop ClickHouse raised an error Command ['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stdout.log"] return non-zero code 1: bash: line 1: /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stdout.log: No such file or directory (cluster.py:3947, stop_clickhouse) 2026-04-30 17:31:18 [ 458 ] DEBUG : Executing query CREATE TABLE cluster_groups.table_1 (d Date, k UInt64) ENGINE=ReplicatedMergeTree ORDER BY k PARTITION BY toYYYYMM(d); on main_node_1 (cluster.py:3602, query) 2026-04-30 17:31:27 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:27 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:31:28 [ 458 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check) 2026-04-30 17:31:28 [ 458 ] DEBUG : Stdout: 8 ? 
00:00:47 clickhouse (cluster.py:121, run_and_check) 2026-04-30 17:31:28 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:28 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:31:29 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:29 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:31 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:32 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:32 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:33 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:34 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:34 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:38 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:39 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:39 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:42 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:43 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:43 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | 
grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:44 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:45 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:45 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:46 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:47 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:47 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:48 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:49 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:49 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:53 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:54 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:54 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:55 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:56 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:56 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:31:58 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:31:59 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False 
cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:31:59 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:32:01 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:32:02 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:02 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:32:23 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:32:23 [ 458 ] WARNING : Force kill clickhouse in stop_clickhouse. ps:8 (cluster.py:3926, stop_clickhouse) 2026-04-30 17:32:23 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stdout.log"] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:23 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stdout.log"] (cluster.py:113, run_and_check) 2026-04-30 17:32:41 [ 458 ] DEBUG : Stderr:bash: line 1: /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stdout.log: No such file or directory (cluster.py:123, run_and_check) 2026-04-30 17:32:41 [ 458 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check) 2026-04-30 17:32:41 [ 458 ] WARNING : Stop ClickHouse raised an error Command ['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "gdb -batch -ex 'thread apply all bt full' -p 8 > /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stdout.log"] return non-zero code 1: bash: line 1: /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stdout.log: No such file or directory (cluster.py:3947, stop_clickhouse) 2026-04-30 17:32:41 [ 458 ] DEBUG : Executing query CREATE TABLE cluster_groups.table_2 (d Date, k UInt64) ENGINE=ReplicatedMergeTree ORDER BY k PARTITION BY toYYYYMM(d); on main_node_1 (cluster.py:3682, query_and_get_error) 2026-04-30 17:32:52 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:52 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 
'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:32:53 [ 458 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3964, start_clickhouse) 2026-04-30 17:32:53 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:53 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check) 2026-04-30 17:32:57 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:57 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:32:59 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:32:59 [ 458 ] DEBUG : Clickhouse process running. (cluster.py:3975, start_clickhouse) 2026-04-30 17:32:59 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:32:59 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:32:59 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:32:59 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:02 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:03 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:05 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:07 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:10 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:11 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:13 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:14 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:16 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:18 [ 458 ] DEBUG : run 
container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:18 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:20 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:33:20 [ 458 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.23.6:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start) 2026-04-30 17:33:20 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:20 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:22 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:33:22 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:23 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:25 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:26 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:29 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:30 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:32 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:33 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:35 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:37 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:39 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:39 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:40 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:33:40 [ 458 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.23.6:9000). 
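The restart phase visible here alternates two checks: a cheap probe query (select 20) against the native port, and a PID lookup in the container. A refused connection is tolerated as long as the clickhouse process is still alive, which is why the "Return code: 210 ... Connection refused" warnings are only logged and probing resumes. A minimal sketch of that readiness loop (the node API and helper names are assumptions, not the exact cluster.py interface):

```python
import time

def wait_start(node, timeout: float = 120.0):
    """Illustrative: retry a probe query until the server accepts it."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            node.query("select 20")   # cheap readiness probe
            return                    # server answered, startup done
        except Exception:
            # Hypothetical helper mirroring the ps-based PID check above.
            if not node.get_process_pid("clickhouse"):
                raise RuntimeError("clickhouse died during startup")
            time.sleep(1)             # alive but not listening yet, retry
    raise TimeoutError("no time left to start")
```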
(NETWORK_ERROR) (cluster.py:4008, wait_start) 2026-04-30 17:33:40 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:40 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:42 [ 458 ] DEBUG : Stdout:856 (cluster.py:121, run_and_check) 2026-04-30 17:33:42 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:44 [ 458 ] DEBUG : Executing query select 20 on backup_node_1 (cluster.py:3602, query) 2026-04-30 17:33:48 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:48 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:50 [ 458 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3964, start_clickhouse) 2026-04-30 17:33:50 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:50 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check) 2026-04-30 17:33:55 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:55 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:33:57 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:33:57 [ 458 ] DEBUG : Clickhouse process running. 
(cluster.py:3975, start_clickhouse) 2026-04-30 17:33:57 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:33:57 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:34:00 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:34:00 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:01 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:03 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:04 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:06 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:07 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:08 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:10 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:12 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:14 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:15 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:34:15 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:34:16 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:34:16 [ 458 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.23.5:9000). 
(NETWORK_ERROR) (cluster.py:4008, wait_start) 2026-04-30 17:34:16 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:34:16 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:34:18 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:34:18 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:20 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:22 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:23 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:26 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:28 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:31 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:32 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:34 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:35 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:36 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:34:36 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:34:39 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:34:39 [ 458 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.23.5:9000). 
(NETWORK_ERROR) (cluster.py:4008, wait_start) 2026-04-30 17:34:39 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:34:39 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:34:41 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:34:41 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:43 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:44 [ 458 ] DEBUG : Executing query select 20 on backup_node_2 (cluster.py:3602, query) 2026-04-30 17:34:50 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:34:50 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:35:22 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:35:22 [ 458 ] DEBUG : Clickhouse process running. (cluster.py:3975, start_clickhouse) 2026-04-30 17:35:22 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:35:22 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:35:48 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:35:48 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:50 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:52 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:53 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:55 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:57 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:35:59 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:36:00 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:36:01 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:36:03 [ 458 ] DEBUG : Executing query select 20 on main_node_2 (cluster.py:3602, query) 2026-04-30 17:36:04 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 
detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:36:04 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:36:05 [ 458 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2026-04-30 17:36:05 [ 458 ] DEBUG : Stdout:871 (cluster.py:121, run_and_check) 2026-04-30 17:36:05 [ 458 ] WARNING : ERROR Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Connection refused (172.16.23.8:9000). (NETWORK_ERROR) (cluster.py:4008, wait_start) 2026-04-30 17:36:05 [ 458 ] ERROR : No time left to start. But process is still running. Will dump threads. (cluster.py:4013, wait_start) 2026-04-30 17:36:05 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2173, exec_in_container) 2026-04-30 17:36:05 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check) 2026-04-30 17:36:06 [ 458 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check) 2026-04-30 17:36:06 [ 458 ] DEBUG : Stdout: 8 ? 00:06:12 clickhouse (cluster.py:121, run_and_check) 2026-04-30 17:36:06 [ 458 ] INFO : PS RESULT: PID TTY TIME CMD 8 ? 00:06:12 clickhouse (cluster.py:4019, wait_start) 2026-04-30 17:36:06 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2173, exec_in_container) 2026-04-30 17:36:06 [ 458 ] DEBUG : Command:['docker', 'exec', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2026-04-30 17:36:09 [ 458 ] WARNING : Current start attempt failed. Will kill 8 just in case. 
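The kill that follows races against the process's own exit: between the last ps and the kill -9, PID 8 disappears, so bash reports "No such process" and exits 1. The harness issues this kill with nothrow:True for exactly that reason; a nonzero exit here is expected noise, not a failure. A sketch of the tolerant-kill pattern (helper name illustrative):

```python
import subprocess

def kill_if_running(container: str, pid: int):
    """Illustrative: best-effort SIGKILL; a vanished PID is not an error."""
    result = subprocess.run(
        ["docker", "exec", "-u", "root", container,
         "bash", "-c", f"kill -9 {pid}"],
        capture_output=True, text=True,
    )
    # Exit code 1 with "No such process" means we lost the race: fine.
    if result.returncode != 0 and "No such process" not in result.stderr:
        raise RuntimeError(result.stderr.strip())
```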
(cluster.py:3982, start_clickhouse) 2026-04-30 17:36:09 [ 458 ] DEBUG : run container_id:roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 detach:False nothrow:True cmd: ['bash', '-c', 'kill -9 8'] (cluster.py:2173, exec_in_container) 2026-04-30 17:36:09 [ 458 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestreplicateddatabaseclustergroups_gw7_main_node_2_1', 'bash', '-c', 'kill -9 8'] (cluster.py:113, run_and_check) 2026-04-30 17:36:14 [ 458 ] DEBUG : Stderr:bash: line 1: kill: (8) - No such process (cluster.py:123, run_and_check) 2026-04-30 17:36:14 [ 458 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check) ---------------------------- Captured log teardown ----------------------------- 2026-04-30 17:36:17 [ 458 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', '--project-name', 'roottestreplicateddatabaseclustergroups_gw7', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/docker-compose.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... 
done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Stderr:Stopping roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:36:33 [ 458 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/.env', '--project-name', 'roottestreplicateddatabaseclustergroups_gw7', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/main_node_2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_replicated_database_cluster_groups/_instances_0_gw7/backup_node_2/docker-compose.yml', 'down', '--volumes'] (cluster.py:113, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... 
(cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_main_node_1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_backup_node_1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo3_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_backup_node_2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_main_node_2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing roottestreplicateddatabaseclustergroups_gw7_zoo2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Stderr:Removing network roottestreplicateddatabaseclustergroups_gw7_default (cluster.py:123, run_and_check) 2026-04-30 17:36:38 [ 458 ] DEBUG : Cleanup called (cluster.py:876, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : Docker networks for project roottestreplicateddatabaseclustergroups_gw7 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces) 2026-04-30 17:36:39 [ 458 ] DEBUG : Docker containers for project roottestreplicateddatabaseclustergroups_gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces) 2026-04-30 17:36:39 [ 458 ] DEBUG : Docker volumes for project roottestreplicateddatabaseclustergroups_gw7 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces) 2026-04-30 17:36:39 [ 458 ] DEBUG : Command:docker container list --all --filter name='^/roottestreplicateddatabaseclustergroups_gw7_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check) 2026-04-30 17:36:39 [ 458 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : No running containers for project: roottestreplicateddatabaseclustergroups_gw7 (cluster.py:904, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check) 2026-04-30 17:36:39 [ 458 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check) 2026-04-30 17:36:39 [ 458 ] DEBUG : Images pruned (cluster.py:929, cleanup) 2026-04-30 17:36:39 [ 458 ] DEBUG : Trying to prune unused volumes... 
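The teardown above runs a fixed sequence: docker-compose stop with a 20-second grace period, a zgrep pass over each instance's stderr.log for the "==================" separator that sanitizer reports begin with, docker-compose down --volumes, and finally pruning of networks, images, and volumes. A condensed sketch of that sequence (argument plumbing simplified, paths illustrative):

```python
import subprocess

def teardown(env_file: str, project: str, compose_files: list, stderr_logs: list):
    """Illustrative: stop, scan for sanitizer reports, then remove."""
    base = ["docker-compose", "--env-file", env_file, "--project-name", project]
    for f in compose_files:
        base += ["--file", f]
    subprocess.run(base + ["stop", "--timeout", "20"], check=True)
    for log in stderr_logs:  # e.g. .../main_node_1/logs/stderr.log
        subprocess.run(
            ["bash", "-c",
             f'[ -f {log} ] && zgrep -aH "==================" {log}* || true'],
            check=True,
        )
    subprocess.run(base + ["down", "--volumes"], check=True)
    subprocess.run(["docker", "image", "prune", "-f"], check=True)
```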
(cluster.py:935, cleanup)
2026-04-30 17:36:39 [ 458 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2026-04-30 17:36:40 [ 458 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check)
_____________________ test_polymorphic_parts_non_adaptive ______________________
[gw3] linux -- Python 3.10.12 /usr/bin/python3

start_cluster =

    def test_polymorphic_parts_non_adaptive(start_cluster):
        node1.query("SYSTEM STOP MERGES")
        node2.query("SYSTEM STOP MERGES")
        insert_random_data("non_adaptive_table", node1, 100)
>       node2.query("SYSTEM SYNC REPLICA non_adaptive_table", timeout=20)

test_polymorphic_parts/test.py:427:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3603: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 242, stderr: Received exception from server (version 24.8.14):
E       Code: 242. DB::Exception: Received from 172.16.10.16:9000. DB::Exception: Table is in readonly mode (replica path: /clickhouse/tables/test/shard1/non_adaptive_table/replicas/1). Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x00000000343c5254
E       1. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001adb62c9
E       2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000aa94445
E       3. DB::Exception::Exception(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000aac13f4
E       4. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::assertNotReadonly() const @ 0x000000002ba75de3
E       5. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:0: DB::StorageReplicatedMergeTree::waitForProcessingQueue(unsigned long, DB::SyncReplicaMode, std::unordered_set, std::equal_to, std::allocator>) @ 0x000000002bc3dceb
E       6. ./build_docker/./src/Interpreters/InterpreterSystemQuery.cpp:1141: DB::InterpreterSystemQuery::syncReplica(DB::ASTSystemQuery&) @ 0x0000000029c0de68
E       7. ./build_docker/./src/Interpreters/InterpreterSystemQuery.cpp:691: DB::InterpreterSystemQuery::execute() @ 0x0000000029bfae3e
E       8. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000029b9507c
E       9. ./build_docker/./src/Interpreters/executeQuery.cpp:1397: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000029b8e405
E       10. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000002d1e4cec
E       11. ./build_docker/./src/Server/TCPHandler.cpp:2527: DB::TCPHandler::run() @ 0x000000002d218c00
E       12. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x00000000345a29ef
E       13. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x00000000345a35d7
E       14. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x00000000344a5ceb
E       15. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003449fe48
E       16. asan_thread_start(void*) @ 0x000000000aa49059
E       17. ? @ 0x00007fe25a744ac3
E       18. ? @ 0x00007fe25a7d6850
E       . (TABLE_IS_READ_ONLY)
E       (query: SYSTEM SYNC REPLICA non_adaptive_table)

helpers/client.py:239: QueryRuntimeException
------------------------------ Captured log call -------------------------------
2026-04-30 17:39:29 [ 416 ] DEBUG : Executing query SYSTEM STOP MERGES on node1 (cluster.py:3602, query)
2026-04-30 17:39:30 [ 416 ] DEBUG : Executing query SYSTEM STOP MERGES on node2 (cluster.py:3602, query)
2026-04-30 17:39:31 [ 416 ] DEBUG : Executing query INSERT INTO non_adaptive_table VALUES ('2019-10-11',0,'7J5Z7CQZ6N8BNH4HOYAM95R3M18MUJI3RHJSO5SCA75GG9CEYFLT9VZNSE74SE93E9DZH7TBLFGLYE2L8CGLS6MHNOTIK01XVSVBY5EH0SXZWFZY8SIOTIWO6IF2ALCP6MK3AU9EDSNS01UY8253EYRAYD8PS41FKSBNHM3ND0K3FFEJI94DM3CQGZC31SDVFB5LY8TF538WJ8VCKG6YS72JN3HI62D8WRQOBKT62KVHSKCVVY6B0HL96WWZUD0DQBEWHMZ7MA8G7QP7SGO0IY2L0X8MB277DV5MO1QCYNE3R5Z5LUX48CEQCZ19PY63NEEYUH59JL6UG1K1IMU222WJ8158NV95BE2GV45P56153DVU3PTWRD95DYOSYR233TB4Y9QKPXDS6VHB8VKVNZSZLRGBSRTRVQCCBP4XF1EEP5H3D45PDX9LXDU9IDZ8HXHOLDXBB53GXJ0M7JZAJYO2G7A5GNK6EROF2IZTQ96D6K0NELKFK274PNUM7TW3BANNTZDJ6U37E67PX61PXE6T461A596IVOOCFBKS6YU20JWOCAI1CWLKJFEG1DFV4WY7FXVY3CY8P17OICTT69NFU2WYGA4TG34QS0Y9VVNRJYQ1GDF8DZZM1UED5WF2P6W8UABXS2DWGDP45JSHGB3OW50RII6NT86D53YP41PTOYHTY02U4OCB443DI3OCHKU25L25D5JJXUGS78UFGASG14RHQ9RFFPKKCDXF2SG01LZ3NFKK27FX8JQ1MTLVO0L836KJUFA02SZIYIG9DL04EJD',[121, 883, 908, 71, 723, 549, 574, 883, 784, 387, 461, 577, 868, 278, 703, 741, 116, 476, 118, 906, 291, 986, 254, 70, 97, 994, 539, 6, 212, 404, 59 on node1 (cluster.py:3602, query)
2026-04-30 17:39:32 [ 416 ] DEBUG : Executing query SYSTEM SYNC REPLICA non_adaptive_table on node2 (cluster.py:3602, query)
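Note on the failure above: error 242 (TABLE_IS_READ_ONLY) from assertNotReadonly() is the usual readonly-replica race, where node2's ReplicatedMergeTree replica has not yet finished (re)establishing its Keeper session when SYSTEM SYNC REPLICA arrives. A guard placed before the sync would make the intent explicit. The sketch below is illustrative only: wait_replica_active, its timeout, and the poll interval are hypothetical names and values; node.query is the integration-test helper visible in the traceback, and system.replicas.is_readonly is the server-side flag being polled.

    import time

    def wait_replica_active(node, table, timeout=60, poll=0.5):
        """Block until the table's replica leaves readonly mode.

        Hypothetical helper, not part of the test above: polls
        system.replicas.is_readonly via the node.query helper seen
        in the traceback.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            is_readonly = node.query(
                f"SELECT is_readonly FROM system.replicas WHERE table = '{table}'"
            ).strip()
            if is_readonly == "0":
                return
            time.sleep(poll)
        raise TimeoutError(f"replica of {table} still readonly after {timeout}s")

    # Usage, mirroring the failing call site:
    #   wait_replica_active(node2, "non_adaptive_table")
    #   node2.query("SYSTEM SYNC REPLICA non_adaptive_table", timeout=20)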
---------------------------- Captured log teardown -----------------------------
2026-04-30 17:39:38 [ 416 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/.env', '--project-name', 'roottestpolymorphicparts_gw3', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node3/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node4/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node5/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node6/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node9/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node10/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node11/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node12/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node7/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node8/docker-compose.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check)
2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node10_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node9_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node12_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node6_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node8_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node11_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node7_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node4_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node5_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node7_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node9_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node11_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node12_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node8_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node10_1 ...
done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node3_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node4_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node6_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_node5_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Stderr:Stopping roottestpolymorphicparts_gw3_zoo3_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node4/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node4/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:10 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node5/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node5/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node6/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node6/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node9/logs/stderr.log ] && zgrep -aH "==================" 
/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node9/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node10/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node10/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node11/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node11/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node12/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node12/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node7/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node7/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node8/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node8/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true'] (cluster.py:113, run_and_check) 2026-04-30 17:40:11 [ 416 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/.env', '--project-name', 'roottestpolymorphicparts_gw3', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node3/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node4/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node5/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node6/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node9/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node10/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node11/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node12/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node7/docker-compose.yml', '--file', 
'/ClickHouse/tests/integration/test_polymorphic_parts/_instances_0_gw3/node8/docker-compose.yml', 'down', '--volumes'] (cluster.py:113, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node10_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node9_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node12_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node6_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node8_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node11_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node7_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node4_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node5_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo2_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo1_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo3_1 ... (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node4_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node10_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node11_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node7_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node3_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node8_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node5_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node12_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node6_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node9_1 ... 
done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo3_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_node1_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing roottestpolymorphicparts_gw3_zoo2_1 ... done (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stderr:Removing network roottestpolymorphicparts_gw3_default (cluster.py:123, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Cleanup called (cluster.py:876, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Docker networks for project roottestpolymorphicparts_gw3 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces) 2026-04-30 17:40:16 [ 416 ] DEBUG : Docker containers for project roottestpolymorphicparts_gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces) 2026-04-30 17:40:16 [ 416 ] DEBUG : Docker volumes for project roottestpolymorphicparts_gw3 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces) 2026-04-30 17:40:16 [ 416 ] DEBUG : Command:docker container list --all --filter name='^/roottestpolymorphicparts_gw3_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : No running containers for project: roottestpolymorphicparts_gw3 (cluster.py:904, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check) 2026-04-30 17:40:16 [ 416 ] DEBUG : Images pruned (cluster.py:929, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:935, cleanup) 2026-04-30 17:40:16 [ 416 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check) 2026-04-30 17:40:17 [ 416 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check) ============================== slowest durations =============================== 670.85s call test_non_default_compression/test.py::test_preconfigured_default_codec 562.73s call test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0] 547.05s call test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select 496.14s call test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec 477.40s call test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1 423.64s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}] 402.12s call test_replicated_database_cluster_groups/test.py::test_cluster_groups 370.13s call test_rename_column/test.py::test_rename_with_parallel_insert 258.28s setup test_polymorphic_parts/test.py::test_compact_parts_only 250.90s call test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication 242.30s call test_rename_column/test.py::test_rename_distributed 241.66s call test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1] 227.41s call test_search_orphaned_parts/test.py::test_search_orphaned_parts[True] 218.12s call test_profile_events_s3/test.py::test_profile_events 210.47s call test_rename_column/test.py::test_rename_with_parallel_merges 161.39s setup test_profile_events_s3/test.py::test_profile_events 153.25s setup test_parallel_replicas_failover/test.py::test_skip_replicas_without_table 146.46s setup test_replicated_database_cluster_groups/test.py::test_cluster_groups 139.39s call test_polymorphic_parts/test.py::test_compact_parts_only 128.54s call test_search_orphaned_parts/test.py::test_search_orphaned_parts[False] 125.26s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy 121.45s setup test_rename_column/test.py::test_rename_distributed 121.21s call test_postgresql_replica_database_engine_2/test.py::test_too_many_parts 120.59s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}] 118.10s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}] 117.17s call test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 112.58s setup test_mysql_database_engine/test.py::test_mysql_types[float_2] 111.68s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}] 109.31s setup test_non_default_compression/test.py::test_preconfigured_custom_codec 107.18s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}] 100.07s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}] 97.86s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}] 94.55s call test_rename_column/test.py::test_rename_with_parallel_slow_insert 92.02s setup test_select_access_rights/test_from_system_tables.py::test_information_schema 87.25s setup 
test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards 85.93s setup test_remote_prewhere/test.py::test_remote 81.62s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}] 80.46s setup test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 78.23s setup test_replicated_table_attach/test.py::test_startup_with_small_bg_pool 77.92s setup test_optimize_on_insert/test.py::test_empty_parts_optimize 74.93s setup test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 73.52s setup test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication 69.50s setup test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection 68.43s setup test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 68.00s call test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 67.73s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}] 64.97s call test_rename_column/test.py::test_rename_with_parallel_select 64.07s setup test_postgresql_database_engine/test.py::test_datetime 63.38s call test_parallel_replicas_failover/test.py::test_skip_replicas_without_table 63.36s setup test_s3_low_cardinality_right_border/test.py::test_s3_right_border 63.28s call test_rename_column/test.py::test_rename_with_parallel_ttl_delete 62.50s call test_rename_column/test.py::test_rename_with_parallel_ttl_move 59.50s call test_select_access_rights/test_from_system_tables.py::test_information_schema 58.36s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}] 58.28s call test_rename_column/test.py::test_rename_parallel 57.89s setup test_old_versions/test.py::test_client_is_older_than_server 47.09s teardown test_profile_events_s3/test.py::test_profile_events 46.95s call test_postgresql_database_engine/test.py::test_predefined_connection_configuration 42.25s call test_replicated_table_attach/test.py::test_startup_with_small_bg_pool 41.57s teardown test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive 41.52s teardown test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas 41.31s call test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy 40.13s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy 39.39s teardown test_postgresql_database_engine/test.py::test_predefined_connection_configuration 37.47s teardown test_select_access_rights/test_from_system_tables.py::test_information_schema 37.45s call test_rocksdb_read_only/test.py::test_read_only 36.37s teardown test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards 34.75s teardown test_rename_column/test.py::test_rename_with_parallel_ttl_move 33.72s teardown test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 33.48s call test_rename_column/test.py::test_rename_parallel_same_node 33.14s teardown test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 31.26s teardown test_remote_prewhere/test.py::test_remote 30.60s call test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas 30.43s teardown test_optimize_on_insert/test.py::test_empty_parts_optimize 29.95s call 
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy 29.61s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}] 28.89s call test_non_default_compression/test.py::test_preconfigured_custom_codec 28.49s call test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list 27.90s call test_mysql_database_engine/test.py::test_predefined_connection_configuration 26.31s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}] 26.03s call test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact] 25.71s call test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 25.43s call test_postgresql_database_engine/test.py::test_postgresql_password_leak 25.00s call test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 24.30s teardown test_replicated_database_cluster_groups/test.py::test_cluster_groups 22.87s teardown test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned 22.77s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 22.69s teardown test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection 22.31s teardown test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec 22.19s teardown test_postgresql_replica_database_engine_2/test.py::test_too_many_parts 22.18s teardown test_mysql_database_engine/test.py::test_predefined_connection_configuration 21.93s call test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy 21.91s call test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}] 21.79s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}] 19.93s call test_s3_low_cardinality_right_border/test.py::test_s3_right_border 19.82s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 19.50s teardown test_rocksdb_read_only/test.py::test_read_only 18.87s call test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy 18.71s call test_polymorphic_parts/test.py::test_polymorphic_parts_index 18.33s teardown test_old_versions/test.py::test_server_is_older_than_client 18.00s call test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy 17.79s call test_mysql_database_engine/test.py::test_mysql_types[timestamp_default] 17.56s call test_optimize_on_insert/test.py::test_empty_parts_optimize 16.41s call test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide] 16.26s teardown test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 14.98s call test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards 14.38s call test_mysql_database_engine/test.py::test_mysql_types[timestamp_6] 14.24s call test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection 13.54s call test_mysql_database_engine/test.py::test_mysql_types[float_2] 12.04s call test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 11.87s call 
test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options 9.80s call test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 8.96s teardown test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication 8.37s call test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 8.26s call test_postgresql_replica_database_engine_2/test.py::test_replica_consumer 7.88s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 7.76s call test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 7.60s call test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards 7.41s call test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 6.42s call test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 5.39s call test_remote_prewhere/test.py::test_remote 5.02s call test_postgresql_database_engine/test.py::test_datetime 4.82s call test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive 4.27s call test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 3.46s teardown test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1 3.44s teardown test_postgresql_replica_database_engine_2/test.py::test_quoting_publication 3.44s call test_postgresql_replica_database_engine_2/test.py::test_quoting_publication 3.38s call test_old_versions/test.py::test_server_is_older_than_client 3.34s teardown test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration 3.16s teardown test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot 3.14s teardown test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication 3.02s call test_mysql_database_engine/test.py::test_password_leak 2.97s teardown test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name 2.94s teardown test_postgresql_replica_database_engine_2/test.py::test_toast 2.88s call test_non_default_compression/test.py::test_uncompressed_cache_custom_codec 2.80s teardown test_postgresql_replica_database_engine_2/test.py::test_default_columns 2.77s teardown test_postgresql_replica_database_engine_2/test.py::test_dependent_loading 2.76s teardown test_postgresql_replica_database_engine_2/test.py::test_table_override 2.70s teardown test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options 2.68s teardown test_postgresql_replica_database_engine_2/test.py::test_replica_consumer 2.68s teardown test_postgresql_replica_database_engine_2/test.py::test_generated_columns 2.56s call test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned 2.36s teardown test_postgresql_replica_database_engine_2/test.py::test_materialized_view 2.23s call test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard 2.22s call test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec 2.13s teardown test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2 2.02s call test_old_versions/test.py::test_client_is_older_than_server 2.01s call test_postgresql_replica_database_engine_2/test.py::test_table_override 1.95s teardown test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema 1.58s call test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication 1.56s teardown 
test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence 1.45s call test_postgresql_replica_database_engine_2/test.py::test_default_columns 1.27s call test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name 1.20s call test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot 1.03s call test_postgresql_replica_database_engine_2/test.py::test_toast 0.98s call test_postgresql_replica_database_engine_2/test.py::test_materialized_view 0.96s call test_postgresql_replica_database_engine_2/test.py::test_generated_columns 0.92s teardown test_rename_column/test.py::test_rename_parallel 0.87s call test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2 0.82s call test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence 0.75s call test_postgresql_replica_database_engine_2/test.py::test_dependent_loading 0.68s call test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration 0.61s call test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema 0.06s teardown test_replicated_table_attach/test.py::test_startup_with_small_bg_pool 0.05s setup test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards 0.05s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}] 0.04s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy 0.04s setup test_postgresql_replica_database_engine_2/test.py::test_generated_columns 0.03s teardown test_polymorphic_parts/test.py::test_compact_parts_only 0.03s teardown test_parallel_replicas_failover/test.py::test_skip_replicas_without_table 0.03s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy 0.02s teardown test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards 0.02s teardown test_postgresql_database_engine/test.py::test_datetime 0.02s teardown test_mysql_database_engine/test.py::test_mysql_types[float_2] 0.02s teardown test_search_orphaned_parts/test.py::test_search_orphaned_parts[False] 0.02s teardown test_search_orphaned_parts/test.py::test_search_orphaned_parts[True] 0.01s setup test_postgresql_replica_database_engine_2/test.py::test_quoting_publication 0.01s setup test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}] 0.01s setup test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned 0.01s setup test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}] 0.01s setup test_rename_column/test.py::test_rename_with_parallel_ttl_move 0.01s setup test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 0.01s teardown test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 0.01s teardown 
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}] 0.01s setup test_mysql_database_engine/test.py::test_mysql_types[timestamp_6] 0.01s setup test_search_orphaned_parts/test.py::test_search_orphaned_parts[False] 0.01s teardown test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1] 0.01s setup test_rename_column/test.py::test_rename_with_parallel_select 0.01s setup test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 0.01s setup test_postgresql_database_engine/test.py::test_predefined_connection_configuration 0.01s teardown test_postgresql_database_engine/test.py::test_postgresql_password_leak 0.01s setup test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_rename_column/test.py::test_rename_distributed 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}] 0.01s setup test_search_orphaned_parts/test.py::test_search_orphaned_parts[True] 0.01s setup test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}] 0.01s setup test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide] 0.01s setup test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}] 0.01s setup test_rename_column/test.py::test_rename_parallel_same_node 0.01s teardown test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 0.01s teardown test_non_default_compression/test.py::test_preconfigured_default_codec 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}] 0.01s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}] 0.01s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}] 0.01s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}] 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 
0.00s setup test_rename_column/test.py::test_rename_with_parallel_merges 0.00s setup test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas 0.00s setup test_non_default_compression/test.py::test_uncompressed_cache_custom_codec 0.00s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy 0.00s setup test_rocksdb_read_only/test.py::test_read_only 0.00s setup test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_table_override 0.00s teardown test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide] 0.00s teardown test_mysql_database_engine/test.py::test_password_leak 0.00s teardown test_rename_column/test.py::test_rename_with_parallel_insert 0.00s teardown test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec 0.00s setup test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1] 0.00s teardown test_s3_low_cardinality_right_border/test.py::test_s3_right_border 0.00s setup test_polymorphic_parts/test.py::test_polymorphic_parts_index 0.00s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy 0.00s teardown test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0] 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 0.00s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy 0.00s setup test_mysql_database_engine/test.py::test_mysql_types[timestamp_default] 0.00s setup test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select 0.00s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy 0.00s teardown test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}] 0.00s teardown test_rename_column/test.py::test_rename_parallel_same_node 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 0.00s setup test_old_versions/test.py::test_server_is_older_than_client 0.00s setup test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list 0.00s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list 0.00s teardown test_non_default_compression/test.py::test_preconfigured_custom_codec 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_password_leak 0.00s teardown test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy 0.00s teardown test_rename_column/test.py::test_rename_with_parallel_select 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name 0.00s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}] 0.00s setup test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}] 0.00s teardown test_mysql_database_engine/test.py::test_mysql_types[timestamp_6] 0.00s setup 
test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration 0.00s teardown test_rename_column/test.py::test_rename_with_parallel_merges 0.00s teardown test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard 0.00s setup test_rename_column/test.py::test_rename_with_parallel_slow_insert 0.00s setup test_mysql_database_engine/test.py::test_password_leak 0.00s teardown test_old_versions/test.py::test_client_is_older_than_server 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2 0.00s setup test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive 0.00s setup test_mysql_database_engine/test.py::test_predefined_connection_configuration 0.00s setup test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_default_columns 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1 0.00s setup test_non_default_compression/test.py::test_preconfigured_default_codec 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_replica_consumer 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_too_many_parts 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_materialized_view 0.00s setup test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema 0.00s teardown test_rename_column/test.py::test_rename_with_parallel_slow_insert 0.00s setup test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 0.00s teardown test_mysql_database_engine/test.py::test_mysql_types[timestamp_default] 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_dependent_loading 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_toast 0.00s setup test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec 0.00s teardown test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 0.00s setup test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 0.00s teardown test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select 0.00s setup test_rename_column/test.py::test_rename_with_parallel_insert 0.00s setup test_rename_column/test.py::test_rename_with_parallel_ttl_delete 0.00s setup test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence 0.00s setup test_rename_column/test.py::test_rename_parallel 0.00s teardown test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 0.00s teardown test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 0.00s teardown test_polymorphic_parts/test.py::test_polymorphic_parts_index 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 0.00s teardown test_rename_column/test.py::test_rename_with_parallel_ttl_delete 0.00s teardown 
test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}] 0.00s teardown test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 0.00s teardown test_non_default_compression/test.py::test_uncompressed_cache_custom_codec
=========================== short test summary info ============================
FAILED test_rename_column/test.py::test_rename_distributed - helpers.client.Q...
FAILED test_replicated_table_attach/test.py::test_startup_with_small_bg_pool
FAILED test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned
FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1
FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2
FAILED test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema
FAILED test_postgresql_replica_database_engine_2/test.py::test_default_columns
FAILED test_postgresql_replica_database_engine_2/test.py::test_dependent_loading
FAILED test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot
FAILED test_postgresql_replica_database_engine_2/test.py::test_generated_columns
FAILED test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence
FAILED test_postgresql_replica_database_engine_2/test.py::test_materialized_view
FAILED test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration
FAILED test_postgresql_replica_database_engine_2/test.py::test_quoting_publication
FAILED test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication
FAILED test_postgresql_replica_database_engine_2/test.py::test_replica_consumer
FAILED test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name
FAILED test_postgresql_replica_database_engine_2/test.py::test_table_override
FAILED test_postgresql_replica_database_engine_2/test.py::test_toast - helper...
FAILED test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select
FAILED test_rename_column/test.py::test_rename_parallel - helpers.client.Quer...
FAILED test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0]
FAILED test_rename_column/test.py::test_rename_parallel_same_node - helpers.c...
FAILED test_replicated_database_cluster_groups/test.py::test_cluster_groups
FAILED test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive - ...
ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1
ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2
ERROR test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema
ERROR test_postgresql_replica_database_engine_2/test.py::test_default_columns
ERROR test_postgresql_replica_database_engine_2/test.py::test_dependent_loading
ERROR test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot
ERROR test_postgresql_replica_database_engine_2/test.py::test_generated_columns
ERROR test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence
ERROR test_postgresql_replica_database_engine_2/test.py::test_materialized_view
ERROR test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration
ERROR test_postgresql_replica_database_engine_2/test.py::test_quoting_publication
ERROR test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication
ERROR test_postgresql_replica_database_engine_2/test.py::test_replica_consumer
ERROR test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name
ERROR test_postgresql_replica_database_engine_2/test.py::test_table_override
ERROR test_postgresql_replica_database_engine_2/test.py::test_toast - helpers...
ERROR test_postgresql_replica_database_engine_2/test.py::test_too_many_parts
PASSED test_old_versions/test.py::test_client_is_older_than_server PASSED test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard PASSED test_old_versions/test.py::test_server_is_older_than_client PASSED test_postgresql_database_engine/test.py::test_datetime PASSED test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 PASSED test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 PASSED test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables PASSED test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl PASSED test_postgresql_database_engine/test.py::test_postgres_database_old_syntax PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries PASSED test_mysql_database_engine/test.py::test_mysql_types[float_2] PASSED test_non_default_compression/test.py::test_preconfigured_custom_codec PASSED test_mysql_database_engine/test.py::test_mysql_types[timestamp_6] PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache PASSED test_mysql_database_engine/test.py::test_mysql_types[timestamp_default] PASSED test_mysql_database_engine/test.py::test_password_leak PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl PASSED test_mysql_database_engine/test.py::test_predefined_connection_configuration PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list PASSED test_postgresql_database_engine/test.py::test_postgresql_database_with_schema PASSED test_postgresql_database_engine/test.py::test_postgresql_fetch_tables PASSED
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy PASSED test_postgresql_database_engine/test.py::test_postgresql_password_leak PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}] PASSED test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards PASSED test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy PASSED test_postgresql_database_engine/test.py::test_predefined_connection_configuration PASSED test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy PASSED test_parallel_replicas_failover/test.py::test_skip_replicas_without_table PASSED test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication PASSED test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}] PASSED test_search_orphaned_parts/test.py::test_search_orphaned_parts[False] PASSED test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options PASSED test_polymorphic_parts/test.py::test_compact_parts_only PASSED test_optimize_on_insert/test.py::test_empty_parts_optimize PASSED test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}] PASSED test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact] PASSED test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide] PASSED test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop PASSED test_rocksdb_read_only/test.py::test_read_only PASSED test_remote_prewhere/test.py::test_remote PASSED test_range_hashed_dictionary_types/test.py::test_range_hashed_dict PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}] PASSED test_search_orphaned_parts/test.py::test_search_orphaned_parts[True] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}] PASSED test_select_access_rights/test_from_system_tables.py::test_information_schema PASSED test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable PASSED test_non_default_compression/test.py::test_preconfigured_default_codec PASSED test_profile_events_s3/test.py::test_profile_events PASSED test_postgresql_replica_database_engine_2/test.py::test_too_many_parts PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}] PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}] PASSED test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1] PASSED test_polymorphic_parts/test.py::test_polymorphic_parts_index PASSED 
PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}]
PASSED test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec
PASSED test_non_default_compression/test.py::test_uncompressed_cache_custom_codec
PASSED test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec
PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}]
PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}]
PASSED test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}]
PASSED test_rename_column/test.py::test_rename_with_parallel_insert
PASSED test_rename_column/test.py::test_rename_with_parallel_merges
PASSED test_rename_column/test.py::test_rename_with_parallel_select
PASSED test_rename_column/test.py::test_rename_with_parallel_slow_insert
PASSED test_rename_column/test.py::test_rename_with_parallel_ttl_delete
PASSED test_rename_column/test.py::test_rename_with_parallel_ttl_move
============ 25 failed, 75 passed, 17 errors in 1916.98s (0:31:56) =============
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 445, in <module>
    subprocess.check_call(cmd, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_tfur6q --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=1e0b53d756cf -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 'test_mysql_database_engine/test.py::test_mysql_types[float_2]' 'test_mysql_database_engine/test.py::test_mysql_types[timestamp_6]' 'test_mysql_database_engine/test.py::test_mysql_types[timestamp_default]' test_mysql_database_engine/test.py::test_password_leak test_mysql_database_engine/test.py::test_predefined_connection_configuration test_non_default_compression/test.py::test_preconfigured_custom_codec test_non_default_compression/test.py::test_preconfigured_default_codec
test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec test_non_default_compression/test.py::test_uncompressed_cache_custom_codec test_non_default_compression/test.py::test_uncompressed_cache_plus_zstd_codec test_old_versions/test.py::test_client_is_older_than_server test_old_versions/test.py::test_distributed_query_initiator_is_older_than_shard test_old_versions/test.py::test_server_is_older_than_client test_optimize_on_insert/test.py::test_empty_parts_optimize test_parallel_replicas_failover/test.py::test_skip_replicas_without_table test_parallel_replicas_failover/test.py::test_skip_unresponsive_replicas 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-101-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[1-11-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-101-100000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-1000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-10000-SELECT sum(key) FROM {table_name}]' 'test_parallel_replicas_invisible_parts/test.py::test_reading_with_invisible_parts[11-11-100000-SELECT sum(key) FROM {table_name}]' test_parallel_replicas_skip_shards/test.py::test_error_on_unavailable_shards test_parallel_replicas_skip_shards/test.py::test_skip_unavailable_shards test_polymorphic_parts/test.py::test_compact_parts_only 'test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_compact-Compact]' 'test_polymorphic_parts/test.py::test_different_part_types_on_replicas[polymorphic_table_wide-Wide]' 'test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node0-second_node0]' 'test_polymorphic_parts/test.py::test_polymorphic_parts_basics[first_node1-second_node1]' test_polymorphic_parts/test.py::test_polymorphic_parts_index test_polymorphic_parts/test.py::test_polymorphic_parts_non_adaptive test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl test_postgresql_database_engine/test.py::test_postgres_database_old_syntax test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 
test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration test_postgresql_replica_database_engine_2/test.py::test_add_new_table_to_replication test_postgresql_replica_database_engine_2/test.py::test_bad_connection_options test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_1 test_postgresql_replica_database_engine_2/test.py::test_database_with_multiple_non_default_schemas_2 test_postgresql_replica_database_engine_2/test.py::test_database_with_single_non_default_schema test_postgresql_replica_database_engine_2/test.py::test_default_columns test_postgresql_replica_database_engine_2/test.py::test_dependent_loading test_postgresql_replica_database_engine_2/test.py::test_failed_load_from_snapshot test_postgresql_replica_database_engine_2/test.py::test_generated_columns test_postgresql_replica_database_engine_2/test.py::test_generated_columns_with_sequence test_postgresql_replica_database_engine_2/test.py::test_materialized_view test_postgresql_replica_database_engine_2/test.py::test_predefined_connection_configuration test_postgresql_replica_database_engine_2/test.py::test_quoting_publication test_postgresql_replica_database_engine_2/test.py::test_remove_table_from_replication test_postgresql_replica_database_engine_2/test.py::test_replica_consumer test_postgresql_replica_database_engine_2/test.py::test_symbols_in_publication_name test_postgresql_replica_database_engine_2/test.py::test_table_override test_postgresql_replica_database_engine_2/test.py::test_toast test_postgresql_replica_database_engine_2/test.py::test_too_many_parts test_profile_events_s3/test.py::test_profile_events test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_client_suggestions_connection test_range_hashed_dictionary_types/test.py::test_range_hashed_dict test_remote_prewhere/test.py::test_remote test_rename_column/test.py::test_rename_distributed test_rename_column/test.py::test_rename_distributed_parallel_insert_and_select test_rename_column/test.py::test_rename_parallel test_rename_column/test.py::test_rename_parallel_same_node test_rename_column/test.py::test_rename_with_parallel_insert test_rename_column/test.py::test_rename_with_parallel_merges test_rename_column/test.py::test_rename_with_parallel_select test_rename_column/test.py::test_rename_with_parallel_slow_insert test_rename_column/test.py::test_rename_with_parallel_ttl_delete test_rename_column/test.py::test_rename_with_parallel_ttl_move test_replicated_database_cluster_groups/test.py::test_cluster_groups test_replicated_table_attach/test.py::test_startup_with_small_bg_pool test_replicated_table_attach/test.py::test_startup_with_small_bg_pool_partitioned test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop test_rocksdb_read_only/test.py::test_read_only test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable test_s3_low_cardinality_right_border/test.py::test_s3_right_border test_s3_low_cardinality_right_border/test.py::test_s3_right_border_2 test_s3_low_cardinality_right_border/test.py::test_s3_right_border_3 test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_no_proxy 
test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_env_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_proxy_list_no_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy test_s3_table_function_with_http_proxy/test.py::test_s3_with_http_remote_proxy_no_proxy 'test_search_orphaned_parts/test.py::test_search_orphaned_parts[False]' 'test_search_orphaned_parts/test.py::test_search_orphaned_parts[True]' test_select_access_rights/test_from_system_tables.py::test_information_schema -vvv" altinityinfra/integration-tests-runner:37a9815fd2fa ' returned non-zero exit status 1.
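
The closing traceback comes from the integration test runner itself, not from any individual test: pytest exits with a non-zero status when any test fails or errors, `docker run` propagates that exit status, and the `subprocess.check_call(cmd, shell=True)` visible at runner line 445 converts it into a CalledProcessError. A minimal sketch of that failure path, using only the two stdlib calls shown in the traceback (the `cmd` below is a stand-in, not the actual docker command):

import subprocess

# Stand-in command: any child process that exits non-zero takes the same
# error path as the failed `docker run ...` invocation quoted above.
cmd = "exit 1"

try:
    # check_call() waits for the child and raises CalledProcessError when the
    # exit status is non-zero -- the raise at subprocess.py:369 in the traceback.
    subprocess.check_call(cmd, shell=True)
except subprocess.CalledProcessError as e:
    print(f"Command {e.cmd!r} returned non-zero exit status {e.returncode}.")

Because the exception carries only the command and the exit status, the "returned non-zero exit status 1" at the end of the log indicates that pytest reported test failures (exit code 1), not that the runner or docker itself crashed.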